> GFS restore points in the capacity tier should be immutable for as long as GFS retention is configured. Does this match your understanding?
> https://helpcenter.veeam.com/docs/backup/vsphere/immutability_capacity_tier.html?ver=110
I believe the confusion here is about short-term immutability: there seems to be an immediate copy to the SOBR with 14-day immutability configured, which means 14-day immutability for the GFS backups too, according to the Veeam documentation:

> Backups are immutable only during the immutability period set in the bucket even if their retention policy allows for longer storage. This also applies to the following types of backups:
> - Backups created with VeeamZIP jobs
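If you want to see this for yourself, here's a minimal boto3 sketch (the bucket and key names are placeholders, not real Veeam paths) that reads the Object Lock retention stamped on an offloaded object. Once `RetainUntilDate` passes, nothing stops deletion, regardless of GFS retention:

```python
# Minimal sketch: inspect when the Object Lock on an offloaded object lapses.
# BUCKET and KEY are placeholders; substitute whatever your capacity tier uses.
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")  # for Wasabi, pass its S3 endpoint via endpoint_url

BUCKET = "example-veeam-capacity-tier"
KEY = "example/offloaded-block"

resp = s3.get_object_retention(Bucket=BUCKET, Key=KEY)
mode = resp["Retention"]["Mode"]                  # GOVERNANCE or COMPLIANCE
retain_until = resp["Retention"]["RetainUntilDate"]

days_left = (retain_until - datetime.now(timezone.utc)).days
print(f"{mode} lock until {retain_until:%Y-%m-%d} ({days_left} days left)")
```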
Sorry, my mistake … I had confused myself.
Also, I was going through some v12 content and noticed it specifically mentioned that GFS will be protected with immutability for its entire retention period. Bring on v12!
Hi @Stabz!
That’s correct. You’d have used the S3 Compatible option to add your Wasabi object storage, so at this stage (https://helpcenter.veeam.com/docs/backup/vsphere/compatible_storage_details.html?ver=110) of the configuration you’d have defined your immutability period.
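For reference, the bucket side of this is small: Object Lock has to be enabled when the bucket is created (on AWS it can’t be added later), and, as I understand it, you shouldn’t set a default bucket retention, since Veeam stamps retention on each object from the immutability period you set in the wizard. A hedged boto3 sketch, with a placeholder bucket name:

```python
# Sketch only: create a bucket ready for Veeam immutability. The bucket name is
# a placeholder. ObjectLockEnabledForBucket=True also turns on versioning,
# which Object Lock requires. Veeam then sets RetainUntilDate per object based
# on the immutability period configured in the repository wizard.
import boto3

s3 = boto3.client("s3")  # point endpoint_url at Wasabi if that's your target

s3.create_bucket(
    Bucket="example-veeam-immutable",
    ObjectLockEnabledForBucket=True,
    # outside us-east-1, AWS also needs:
    # CreateBucketConfiguration={"LocationConstraint": "<region>"}
)
```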
To achieve what you want, yes, you’d want immutability in the archive tier, with your GFS backups destaged to the archive tier and AWS Glacier archiving as the only option for this: only AWS S3 Glacier and Azure Archive are supported for the archive tier in a SOBR at the moment, and of those two, only AWS S3 Glacier supports immutability in a way that’s compatible with Veeam.
This is an unfortunate complication of using a SOBR for both copy and move: you can only set the immutability policy optimised for one or the other. In v12 this will change, as you’ll be able to run backup and backup copy jobs directly to object storage. You could then do a backup copy from your SOBR to a short-term immutable bucket for your 14-day retention immediately (to meet off-site and immutability compliance), with SOBR capacity tier offload moving older backups to another object storage repository configured with an appropriate immutability length (you could also use a hardened repo here for short-term immutability on your primary storage).
Hopefully this helps!
> GFS restore points in the capacity tier should be immutable for as long as GFS retention is configured. Does this match your understanding?
> https://helpcenter.veeam.com/docs/backup/vsphere/immutability_capacity_tier.html?ver=110
>
> I believe the confusion here is about short-term immutability: there seems to be an immediate copy to the SOBR with 14-day immutability configured, which means 14-day immutability for the GFS backups too, according to the Veeam documentation:
>
> > Backups are immutable only during the immutability period set in the bucket even if their retention policy allows for longer storage. This also applies to the following types of backups:
> > - Backups created with VeeamZIP jobs
For my customers, they are using 30 or 90 days of immutability. GFS retention keeps much more, in most cases up to 1 or even 3 years for some, depending on how much they are willing to pay, but that’s more archival of data than data recovery. Cost is the primary reason a lot of them are using object storage over VCC: they can keep more data for cheaper with object, and they want immutability now. The immutability period is more about recovery: assuming backups are immutable for 90 days, that data can’t be touched for 90 days.
If I had clients looking for more of an archival function, then we’d be talking about Glacier with immutability, or possibly tapes, etc. I have one client that recently started to distinguish between archiving and recovering data. They have archival VMs that keep 7 years of data, and those VMs are backed up to tape and rotated yearly, so at any given time they can go back nearly 8 years if needed. They were going to try to keep the VMs for 7 years, but quickly realized that would take a fair amount of space, and they didn’t need to go back 14 years.
Most of my clients are focused on recovery and don’t have as stringent archival requirements, if any at all.
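To put numbers on that trade-off (the figures are purely illustrative), a GFS point kept for a year under 90-day immutability spends most of its life deletable:

```python
# Quick arithmetic on the protection gap being discussed: a GFS point kept for
# 1 year with 90-day bucket immutability is deletable for most of its life.
from datetime import date, timedelta

created = date(2022, 1, 1)          # example GFS restore point
gfs_retention_days = 365            # how long Veeam keeps it
immutability_days = 90              # Object Lock period set on the repository

immutable_until = created + timedelta(days=immutability_days)
expires = created + timedelta(days=gfs_retention_days)

print(f"Immutable:  {created} -> {immutable_until}")
print(f"Deletable:  {immutable_until} -> {expires} "
      f"({(expires - immutable_until).days} days unprotected)")
```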
> Also, I was going through some v12 content and noticed it specifically mentioned that GFS will be protected with immutability for its entire retention period. Bring on v12!
I was not aware of this, but this is excellent news.
> GFS restore points in the capacity tier should be immutable for as long as GFS retention is configured. Does this match your understanding?
> https://helpcenter.veeam.com/docs/backup/vsphere/immutability_capacity_tier.html?ver=110
@vNote42 Be careful: with a capacity tier, the backups are immutable only during the immutability period set in the bucket, even if their retention policy allows for longer storage.
So GFS points could be deleted without any problem...
@MicoolPaul Thanks for the clarification, which confirms my issue… Is there a difference between the copy and move options when it comes to immutability in a SOBR?
I am waiting for v12 to avoid this kind of problem; many of my customers want to set different retention and protection periods in S3, which is not possible today in a SOBR.
Thinking purely in the realm of possibility, not practicality: if you have two SOBRs, you could use a backup copy from one to the other, with one SOBR doing an immediate copy to cloud with a short-term object lock and the other doing move to object with a long-term object lock.
Another way to achieve something similar would be an immutable local hardened repo with move to object configured and a long-term object lock.
We’ve discussed moving GFS to the archive tier as well. Otherwise, as you say you’ve got many customers: if you’re providing VCC, you could layer some of this up with backup copies, letting VCC do the heavy lifting, because copying to VCC often means sending off-site anyway. So immutability on a hardened repo (or even using insider threat protection) could give you the short-term immutability, with your own offload to object (or even tape!).
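To make the layering concrete, here’s a rough sketch (the copies and lock lengths are illustrative assumptions, not recommendations) of which copy still holds a lock at a given restore point age:

```python
# Rough model of layered coverage: for a restore point of a given age, which
# copy is still immutable? Lock lengths here are made-up examples.
copies = [
    ("Hardened repo (local)",       14),    # short-term immutability
    ("SOBR A capacity tier (copy)", 14),    # immediate off-site copy, short lock
    ("SOBR B capacity tier (move)", 365),   # long-term tier, long lock
]

for age_days in (7, 30, 400):
    locked = [name for name, lock in copies if age_days <= lock]
    print(f"Day {age_days:>3}: immutable on {locked or 'nothing!'}")
```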
Don’t know if this helps with any further ideas.
Hi @Stabz
I could’ve worded it better; I assume it’s this line:
> if you have two SOBRs, you could use a backup copy from one to the other
In which case, I meant: back up to one SOBR, then run a BCJ (backup copy job) to the other.
Ok we are aligned
> Thinking purely in the realm of possibility, not practicality: if you have two SOBRs, you could use a backup copy from one to the other, with one SOBR doing an immediate copy to cloud with a short-term object lock and the other doing move to object with a long-term object lock.
>
> Another way to achieve something similar would be an immutable local hardened repo with move to object configured and a long-term object lock.
>
> We’ve discussed moving GFS to the archive tier as well. Otherwise, as you say you’ve got many customers: if you’re providing VCC, you could layer some of this up with backup copies, letting VCC do the heavy lifting, because copying to VCC often means sending off-site anyway. So immutability on a hardened repo (or even using insider threat protection) could give you the short-term immutability, with your own offload to object (or even tape!).
>
> Don’t know if this helps with any further ideas.
Hello @MicoolPaul
Maybe I didn’t follow your idea, but you said to configure a backup copy from another backup copy? I don’t think that’s possible.
VCC could be a solution, right, but we haven’t implemented a hardened repository yet. Insider protection is a good option, but it doesn’t replace immutability. And I have a doubt: if someone changes the retention in the backup copy job to the VCC, couldn’t all the files be deleted without a problem?