Solved

Migrating from repository with copy job to SOBR


Userlevel 1

Hi Veeam Community!

We are looking to migrate our backups from a standard repository to a SOBR in order to utilise Wasabi storage for the capacity tier. We currently have backup jobs storing to a local storage repository with 30-day retention, with copy jobs to another backup repository at one of our other sites for offsite/archival backup purposes. The local repository is around 30TB and the ‘offsite/archive’ repository is around 100TB.

Ideally we would like to move the primary repository to the performance tier of a SOBR and the remote repository to the capacity tier.

Is this a supported migration, or will the backup copy job part cause us issues? Ideally we would prefer not to have to create new jobs, and to maintain backup history. We are running B&R V12.

Any insight or tips on how to achieve the above would be greatly appreciated.

Thanks!

Cameron

 


Best answer by MicoolPaul 26 June 2023, 10:19


7 comments

Userlevel 1

Thanks everyone for your comments and help with this. @MicoolPaul, the setup you have suggested sounds exactly what we are looking for, really appreciate your detailed response. @dloseke, thanks for the Wasabi RCS tip, will definitely be speaking with our account manager about this!

Userlevel 7
Badge +6

> Hi @cameronmcshane,
>
> Makes sense now, thank you.
>
> Here’s what I would do:
>
> Create 2x buckets, or 1x bucket with 2x folders, within Wasabi.
>
> I would add one bucket to a SOBR containing your local backup repository, as you’ve said you wish to do. I’d configure it to copy backups when they’re created; this gets all of your new backups uploaded to the cloud immediately and starts your new intended data protection strategy. If you want backups removed from the capacity tier and the performance tier at different times, also configure the move section of the capacity tier to remove backups from your performance tier after X days. (Tip: set the backup job retention policy to whatever you’d like to keep in the capacity tier; the move setting above controls when a backup disappears from the performance tier to reside solely on the capacity tier.)

 

This is precisely what I was thinking. Two buckets, although two folders in the same bucket should be completely fine, and probably simpler, since they’re all managed by the same VBR server.

 

> Just remember:
>
> You’ll have 2x chains in object storage; as they’ll exist in separate spaces, they’ll temporarily consume twice the storage.
>
> Wasabi typically has a 90-day retention minimum, so factor this, and any early-delete fees, into your retention planning.

I will say that the 90-day retention applies when using pay-as-you-go (PAYG); given the size of data we’re talking about, Reserved Capacity Storage (RCS) may be acceptable, as it starts at 50TB. Note that RCS has a 30-day retention policy, last I checked. Even with PAYG, you can contact support, and they should be able to change the 90-day retention to 30 days once you note that the object storage is being used as a Veeam capacity tier.
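For planning purposes, the minimum-storage maths above can be sketched in Python. The per-TB rate below is a placeholder, not a quoted Wasabi price; check your own agreement before relying on the fee figure.

```python
def early_delete_days(age_days: int, minimum_days: int = 90) -> int:
    """Days of storage still billed if a backup is deleted early.

    Wasabi bills deleted objects for the remainder of the minimum
    storage duration: 90 days on pay-as-you-go (or 30 if support
    lowers it for a Veeam capacity tier), 30 days on RCS.
    """
    return max(0, minimum_days - age_days)


def early_delete_fee(size_tb: float, age_days: int,
                     minimum_days: int = 90,
                     rate_per_tb_month: float = 7.0) -> float:
    """Approximate early-delete charge. rate_per_tb_month is a
    placeholder figure, not an actual Wasabi price."""
    months = early_delete_days(age_days, minimum_days) / 30
    return size_tb * months * rate_per_tb_month
```

For example, deleting a 30-day-old backup under the default 90-day minimum still bills the remaining 60 days; under a 30-day minimum it bills nothing.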

Userlevel 7
Badge +22

Hi @cameronmcshane,

 

Makes sense now, thank you.

 

Here’s what I would do:

Create 2x buckets, or 1x bucket with 2x folders, within Wasabi.

I would add one bucket to a SOBR containing your local backup repository, as you’ve said you wish to do. I’d configure it to copy backups when they’re created; this gets all of your new backups uploaded to the cloud immediately and starts your new intended data protection strategy. If you want backups removed from the capacity tier and the performance tier at different times, also configure the move section of the capacity tier to remove backups from your performance tier after X days. (Tip: set the backup job retention policy to whatever you’d like to keep in the capacity tier; the move setting above controls when a backup disappears from the performance tier to reside solely on the capacity tier.)
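The interplay between the job’s retention and the move window described in that tip can be sketched as a small model (a simplification: a real SOBR only moves sealed, inactive chains, which this ignores):

```python
def tiers_holding(age_days: int, move_after_days: int,
                  job_retention_days: int) -> set:
    """Which SOBR tiers still hold a restore point when 'copy on
    creation' is enabled together with 'move after N days'.

    Simplified model of the behaviour described above: the point is
    copied to the capacity tier immediately, leaves the performance
    tier after the move window, and is deleted everywhere once the
    backup job's retention expires.
    """
    if age_days >= job_retention_days:
        return set()                      # retention expired everywhere
    tiers = {"capacity"}                  # copied as soon as it was created
    if age_days < move_after_days:
        tiers.add("performance")          # not yet moved off local storage
    return tiers
```

So with a 14-day move window and 30-day job retention, a 5-day-old point is on both tiers, a 20-day-old point only in Wasabi, and a 40-day-old point is gone.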

 

I would then disable your backup copy job, so your remote site has static data.

I would then add the repository at the remote site into a second SOBR, with the second Wasabi folder/bucket as its capacity tier, configured to move after 0 days. This allows everything within the performance tier to be offloaded, which will take a while.
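How long "a while" is depends almost entirely on the uplink. A back-of-the-envelope estimate (the 0.7 efficiency factor is an assumption for protocol overhead and throttling, not a measured figure):

```python
def offload_days(data_tb: float, uplink_mbps: float,
                 efficiency: float = 0.7) -> float:
    """Rough days needed to offload a repository to object storage."""
    bits = data_tb * 8 * 1000**4                    # decimal TB -> bits
    seconds = bits / (uplink_mbps * 1_000_000 * efficiency)
    return seconds / 86_400
```

The thread’s 100TB remote repository over a dedicated 1Gbps uplink works out to roughly two weeks of continuous offload.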

 

Keep the backup copy job’s retention policy the same, so that your backups, once uploaded to Wasabi, are retained for their intended duration and then deleted.

 

Once you’ve seen this lifecycle through, purge the second bucket/folder used for the remote site.
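If you end up purging the folder outside of Veeam (removing it via the VBR console is cleaner, since the configuration database stays consistent), remember that the S3 DeleteObjects call Wasabi exposes accepts at most 1,000 keys per request, so deletions must be batched. A sketch with hypothetical bucket/prefix names; the boto3 calls are shown only as comments:

```python
from itertools import islice


def batches(keys, size=1000):
    """Yield lists of at most `size` keys (the S3 DeleteObjects limit)."""
    it = iter(keys)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch


# How the batches would be consumed with boto3 against Wasabi's
# S3-compatible endpoint (names are hypothetical):
#
#   s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com")
#   pages = s3.get_paginator("list_objects_v2").paginate(
#       Bucket="veeam-archive", Prefix="remote-sobr/")
#   for page in pages:
#       keys = (obj["Key"] for obj in page.get("Contents", []))
#       for batch in batches(keys):
#           s3.delete_objects(
#               Bucket="veeam-archive",
#               Delete={"Objects": [{"Key": k} for k in batch]})
```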

 

Just remember:

You’ll have 2x chains in object storage; as they’ll exist in separate spaces, they’ll temporarily consume twice the storage.

Wasabi typically has a 90-day retention minimum, so factor this, and any early-delete fees, into your retention planning.
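In concrete terms for this thread’s figures, the temporary worst case is both chains sitting in Wasabi at once:

```python
def peak_object_storage_tb(local_tb: float, remote_tb: float) -> float:
    """Worst-case Wasabi footprint while both SOBRs have offloaded:
    the two chains live in separate buckets/folders, so they add up
    until the second (remote) copy ages out and is purged."""
    return local_tb + remote_tb
```

With the 30TB local and 100TB remote repositories mentioned above, that peaks at 130TB until the second folder is purged.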

 

Hope this helps!

Userlevel 7
Badge +21

> Michael
>
> Thanks for your prompt response. Sorry for the confusion. What we are trying to achieve is to remove the backup copy job and its repository and replace them with Wasabi, while retaining the historical backups. So I guess the steps (if possible) would be:
>
>   1. Add the existing local repository to a new SOBR as the performance tier
>   2. Add Wasabi storage as the capacity tier
>   3. Somehow copy/move the current historical backups to Wasabi/capacity tier (ideally without them having to be ‘pulled back’ to the performance tier and then offloaded to the capacity tier, given the size of the data - 100TB)
>
> The outcome we are looking for is to decommission our existing on-prem storage for historical backups and replace it with Wasabi.
>
> I hope the above clarifies what we are trying to do?
>
> Cheers
>
> Cameron

 

There is no direct way to send copy jobs to the capacity tier. You need to move the current repo into a SOBR, then add Wasabi as the capacity tier and let it offload the data.

Userlevel 1

Michael

Thanks for your prompt response. Sorry for the confusion. What we are trying to achieve is to remove the backup copy job and its repository and replace them with Wasabi, while retaining the historical backups. So I guess the steps (if possible) would be:

  1. Add existing local repository to new SOBR as performance tier
  2. Add Wasabi storage as capacity tier
  3. Somehow copy/move the current historical backups to Wasabi/capacity tier (ideally without them having to be ‘pulled back’ to the performance tier and then offloaded to the capacity tier, given the size of the data - 100TB)

The outcome we are looking for is to decommission our existing on-prem storage for historical backups and replace it with Wasabi.

I hope the above clarifies what we are trying to do?


Cheers

 

Cameron

 

Userlevel 7
Badge +21

Moving the local repository would be fairly straightforward using the new move backup feature in v12.

As for the remote repository: as far as I know, you cannot move anything directly to the capacity tier; it has to land on the performance tier first and then be offloaded to the capacity tier.

Userlevel 7
Badge +22

Hi,

 

There’s a little bit of confusion around wanting Wasabi as the capacity tier versus your remote site as the capacity tier, so I’m going to focus on what’s possible:

 

  • You can create a SOBR, add your local repository to it as the performance tier, and then add your Wasabi object storage as the capacity tier. You can then choose either to “copy” backups to object storage immediately via copy to capacity tier, or to “move” backups to the capacity tier once they’re X days old. You can also enable both settings, to copy immediately to the capacity tier and simply purge the performance tier copy after X days.
  • Your remote storage can’t become a capacity tier unless it’s object storage, so I have two questions at this point:
  1. Were you trying to add remote storage to capacity tier to negate having a backup copy job?
  2. Were you wanting to upload all of your backup copy historical retention to capacity tier?

Just making sure I understand the end goal so I can try to help you best.
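The copy/move placement rules in the first bullet above can be sketched as a small decision function (a simplification that ignores active chains and retention expiry):

```python
def placement(age_days: int, copy_enabled: bool,
              move_after_days=None) -> set:
    """Which SOBR tiers hold a sealed restore point under the
    copy/move settings described above (simplified model)."""
    tiers = {"performance"}
    if copy_enabled:
        tiers.add("capacity")             # copied right after creation
    if move_after_days is not None and age_days >= move_after_days:
        tiers = {"capacity"}              # moved: performance copy purged
    return tiers
```

Copy alone keeps a point on both tiers; move alone keeps it local until the window passes; with both enabled it is on both tiers early on, then only in object storage.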
