Solved

migrating backups to new object storage bucket

  • 16 September 2022
  • 6 comments
  • 103 views

Userlevel 1

Hello Veeam community!

I have been trying to determine the best approach for migrating backups from one object storage bucket to another and need some help.  My customer has a large amount of data, hundreds of TBs of backups, and a bucket-to-bucket copy would take 30 days, which is too long.  They also don't have the space to pull it all back down locally and copy it to the new bucket, so that option won't work.  I am going to have them set up a new capacity tier in the SOBR and point new jobs at the new bucket, but that leaves the old backups orphaned in the old bucket.  I need to move some, but not all, of the backups due to retention policy.  I was trying to determine which backup objects belong to which backup jobs, but I can't determine the backup_id via PowerShell.  My thinking is that this would allow me to "grab" the backups that need to be moved to the new bucket.

Does anyone have a helpful tip on how to get this accomplished?  Thank you!


Best answer by MicoolPaul 19 September 2022, 15:57


6 comments

Userlevel 7
Badge +6

VeeaMover when v12 is released?  I don't have much of an answer beyond letting the data sit in the old bucket and copying data out to the new bucket from the performance tier, if it still exists there, or downloading what you can and uploading it.

Userlevel 7
Badge +20

VeeaMover when v12 is released?  I don't have much of an answer beyond letting the data sit in the old bucket and copying data out to the new bucket from the performance tier, if it still exists there, or downloading what you can and uploading it.

Waiting on v12 is probably your best bet, unfortunately; otherwise you are going to need to copy it manually.  Currently there is no facility in Veeam to assist with this.

Userlevel 7
Badge +20

You won't like the answer, but it is as you expect: prior to v12, the supported way is to download the backups and then offload them to a new capacity tier:

https://helpcenter.veeam.com/docs/backup/vsphere/capacity_tier_migrating.html?ver=110
 

Depending on the capacity tier retention, can you just reconfigure to your new bucket, let it upload new backups, let the old ones age out, and then purge the old bucket?

Userlevel 1

Thank you for the responses.  One follow-on question related to this article: https://helpcenter.veeam.com/docs/backup/vsphere/object_storage_repository_structure.html?ver=110

 

How do I find the <backup_id>?  It would seem that if I can find the job and its related backup_id, I could then move the required objects to the new bucket.

Userlevel 7
Badge +20

Thank you for the responses.  One follow-on question related to this article: https://helpcenter.veeam.com/docs/backup/vsphere/object_storage_repository_structure.html?ver=110

 

How do I find the <backup_id>?  It would seem that if I can find the job and its related backup_id, I could then move the required objects to the new bucket.

Your metadata would likely be referencing your old bucket/container somewhere.
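If you want to see how jobs map to backup IDs on the server side, something like this might be a starting point. It's only a sketch using the standard Veeam Backup & Replication PowerShell cmdlets, and I haven't verified that the Id shown here matches the <backup_id> folder name in the bucket for every product version, so compare against a known backup before relying on it:

```powershell
# Sketch: list each backup with its own Id and the Id of the job that
# created it, so the values can be compared against the <backup_id>
# folders described in the object storage repository structure article.
Import-Module Veeam.Backup.PowerShell

Get-VBRBackup |
    Select-Object Name, Id, JobId |
    Format-Table -AutoSize
```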

I know you're looking for a way to avoid the whole reprocessing, but I feel the need to call out that you are going very far off-piste from a support perspective. As you're talking about a customer's backups, I'd be cautious: if it all went wrong and Veeam said you'd done something unsupported and unfixable, I wouldn't want to be in that position.

 

Why not consider leveraging a cloud provider for some temporary compute and storage to process this data? If you could sit the compute in the same cloud as your storage, you'd get extremely fast processing of this data. It comes at a cost, of course, but the task you're describing will likely result in double storage and API calls being billed temporarily anyway, so you might as well do it safely.

 

Please don’t take my above warning the wrong way, I want to help and find a suitable way forward that doesn’t cause you unnecessary risk 🙂

Userlevel 1

That makes very good sense.  Thank you.
