Hi @cameronmcshane,
Makes sense now thank you.
Here’s what I would do:
Create two buckets in Wasabi, or one bucket with two folders inside it.
I would add one bucket to a SOBR holding your local backups, as you've said you want to do. Configure it to copy backups as soon as they're created; this gets all of your new backups uploaded to the cloud immediately and starts your new data protection strategy. If you want backups removed from the capacity tier and the performance tier at different times, also configure the move setting on the capacity tier to remove backups from the performance tier after X days. (Tip: set the backup job's retention policy to however long you want backups kept in the capacity tier; the move setting above controls when a backup leaves the performance tier to reside solely in the capacity tier.)
I would then disable your backup copy job, so the data at your remote site is static.
I would then add the remote site's repository to a SOBR with the second Wasabi folder/bucket as its capacity tier, configured to move after 0 days. This offloads everything in the performance tier to object storage; it will take a while.
Keep the backup copy job's retention policy the same, so that once uploaded to Wasabi the backups are retained for their intended duration and then deleted.
Once this lifecycle has fully played out, purge the second bucket/folder used for the remote site.
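To make the retention interplay in the first SOBR concrete, here's a minimal Python sketch of my own (not Veeam's actual logic). It assumes the copy policy uploads a restore point to the capacity tier at creation, the move policy evicts it from the performance tier after `move_after` days, and the backup job's retention deletes it everywhere after `retention` days:

```python
def tiers_holding(age_days: int, move_after: int, retention: int) -> set[str]:
    """Which SOBR tiers still hold a restore point of a given age (illustrative model)."""
    if age_days >= retention:
        return set()                      # past job retention: gone from both tiers
    tiers = {"capacity"}                  # copy policy: in object storage from day 0
    if age_days < move_after:
        tiers.add("performance")          # move policy hasn't evicted it locally yet
    return tiers

# Example: keep 30 days total, move off the performance tier after 7 days
for age in (0, 6, 7, 29, 30):
    print(age, sorted(tiers_holding(age, move_after=7, retention=30)))
```

So with these example numbers, a restore point lives on both tiers for its first 7 days, only in Wasabi from day 7 to day 29, and is deleted at day 30.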
Just remember:
You'll temporarily have two backup chains in object storage, in separate spaces, so they'll consume roughly twice the storage until the old chain ages out.
Wasabi typically enforces a 90-day minimum storage duration, so factor this, and any early-delete charges, into your retention planning.
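To help with that planning, here's a rough cost helper I put together, assuming a 90-day minimum storage duration (check your own Wasabi plan, as the minimum varies): objects deleted earlier are still billed for the remainder of that window.

```python
MIN_STORAGE_DAYS = 90  # assumed Wasabi minimum storage duration; varies by plan

def billed_days(stored_days: int, minimum: int = MIN_STORAGE_DAYS) -> int:
    """Days billed for an object stored `stored_days` days and then deleted."""
    return max(stored_days, minimum)

def early_delete_days(stored_days: int, minimum: int = MIN_STORAGE_DAYS) -> int:
    """Extra days billed as an early-delete charge."""
    return max(0, minimum - stored_days)

print(billed_days(30))         # 90 -- a 30-day retention still pays for 90 days
print(early_delete_days(120))  # 0  -- past the minimum, no early-delete charge
```

In other words, a retention shorter than the minimum doesn't save you storage cost with Wasabi, which is worth knowing before you pick X days above.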
Hope this helps!