Knuppel wrote:
We have an on-premises Veeam server that runs the following jobs:
1. Backup to local storage (weekly full, daily incrementals)
2. Copy job to Linux hardened repository (immutable)
3. Copy job to Azure Blob cool storage (immutable)
Now I would like to have the Azure monthly backups moved to the archive tier. I'm assuming I have to use a SOBR for this. The following questions arise:
- I create a second immutable repository with longer immutability, say 1 year. After moving the full backup, will it calculate ‘initial backup date + 1 year’ or ‘SOBR copy date + 1 year’?
- Do I need a new container for this, or can I just create a folder within a container?
- If a backup is still immutable, can it be moved to the archive tier?
- Copying backups obviously generates a large amount of traffic: download the backup from one repository, upload it to another. Is there something I can do to limit the amount of traffic and the costs?
- What's the risk here? To move data, the local Veeam server needs modify permissions on both the cool and archive repositories. Theoretically, a hacker with access to the backup server could stop it from copying to the archive repository, but still delete from the cool repository?
Correct, you must use a SOBR, as you can't back up directly to the archive tier. If the performance tier within your SOBR is object storage, you can skip the capacity tier and offload directly from performance to archive tier. If your performance tier is block storage, then you must have hot/cool object storage in the capacity tier before you're allowed to use an archive tier.
To use an archive tier, you've got to match the storage provider used by the rest of the SOBR's object storage. So an Azure performance/capacity tier means an Azure archive tier; an AWS performance/capacity tier means an AWS archive tier.
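To make that layout rule concrete, here's a minimal sketch in plain Python (the function and parameter names are mine for illustration, not anything from the Veeam API):

```python
# Minimal sketch of the SOBR tiering rule described above.
# Illustrative names only; this is not a Veeam API.

def archive_tier_allowed(performance_tier: str, has_object_capacity_tier: bool) -> bool:
    """Return True if this SOBR layout permits adding an archive tier.

    performance_tier:          "object" (e.g. Azure Blob) or "block" (e.g. local disk)
    has_object_capacity_tier:  True if a hot/cool object storage capacity tier exists
    """
    if performance_tier == "object":
        # Object storage performance tier: the capacity tier can be skipped
        # and performance -> archive offloads are allowed directly.
        return True
    # Block storage performance tier: a hot/cool object storage capacity
    # tier must sit in between before an archive tier is allowed.
    return has_object_capacity_tier

assert archive_tier_allowed("object", False)      # direct offload OK
assert not archive_tier_allowed("block", False)   # needs a capacity tier first
assert archive_tier_allowed("block", True)        # block -> capacity -> archive
```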
Onto your questions:
“I create a second immutable repository with longer immutability, say 1 year. After moving the full backup, will it calculate ‘initial backup date + 1 year’ or ‘SOBR copy date + 1 year’?”
If this second repository is your immutable repository, the immutability period is set to the retention of the backups being offloaded to the archive tier. Source: https://helpcenter.veeam.com/docs/backup/vsphere/immutability_archive_tier.html?ver=120
Quote: “The immutability period of a backup file will be equal to its retention period at the moment of archiving. If the retention period is not specified for VeeamZIP backup files or exported backup files, such files will not be made immutable.”
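So, as I read that doc, the answer is neither of your two formulas: the archived file is locked until the end of its retention as it stands at the moment of archiving. A back-of-the-envelope sketch (all dates made up):

```python
from datetime import date, timedelta

# Illustrative only: per the Veeam doc quoted above, the archived file's
# immutability equals its retention period at the moment of archiving.

backup_created = date(2024, 1, 1)        # initial backup date
retention = timedelta(days=365)          # 1-year retention on the job
retention_ends = backup_created + retention

archived_on = date(2024, 2, 1)           # when the offload to archive ran

# The blob is locked until the retention end date, not archived_on + 1 year:
immutable_until = retention_ends
print(immutable_until)                   # 2024-12-31
```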
“Do I need a new container for this, or can I just create a folder within a container?”
An additional container is not mandatory, nor is creating a new storage account. But these don't cost anything either, and they're great both for segmenting the capacity and archive tiers and for avoiding capacity limitations on object storage services. Source: https://forums.veeam.com/post403136.html#p403136
Further reading on Azure limitations (the main one for you being the default capacity per storage account): https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#standard-storage-account-limits
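If you do go with a dedicated container, it's a one-liner with the Azure SDK. A minimal sketch using the azure-storage-blob Python package (connection string and container name are placeholders; the immutability policy itself is configured on the container/account in Azure and isn't shown here):

```python
from azure.storage.blob import BlobServiceClient

# Placeholder credentials/name: substitute your own.
service = BlobServiceClient.from_connection_string("<connection-string>")

# A dedicated container for the archive tier extent, kept separate from
# the capacity tier's container purely for segmentation; an empty
# container costs nothing.
service.create_container("veeam-archive-tier")
```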
“If a backup is still immutable, can it be moved to the archive tier?”
It will be copied to the archive tier if the immutability hasn't expired, since Veeam can't delete the original copy; a clean-up task will then delete the original in the hot/cool object storage tier once its immutability expires. Source: https://helpcenter.veeam.com/docs/backup/vsphere/archiving_job.html?ver=120
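If you want to watch that happen from the Azure side, you can inspect a blob's current tier and its immutability expiry. A sketch with azure-storage-blob (container and blob names are placeholders; the immutability_policy property is exposed by recent SDK versions when version-level immutability applies to the blob):

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client("veeam-capacity-tier", "<some-backup-blob>")

props = blob.get_blob_properties()
print(props.blob_tier)                        # e.g. "Cool" or "Archive"

# Populated when version-level immutability applies to the blob:
print(props.immutability_policy.expiry_time)  # when the lock lapses
print(props.immutability_policy.policy_mode)  # e.g. "locked" / "unlocked"
```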
“Copying backups obviously generates a large amount of traffic: download the backup from one repository, upload it to another. Is there something I can do to limit the amount of traffic and the costs?”
Archiving happens within the cloud, which is one of the reasons why proxy appliances within the cloud are mandatory. Rather than pulling data down from the cloud (API calls + egress bandwidth + strain on your environment), you run a proxy within the cloud, preferably in the same region as both the source and destination object storage tiers. The proxy appliance reads the blobs from the hot/cool tier, repackages them into larger archive blocks (fewer API calls when/if you need to fetch them back), and writes them to the archive tier. This way you're charged for reads from hot/cool storage and writes to archive storage, but there's no egress bandwidth and no strain on your environment.
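To put rough numbers on the difference, here's a back-of-the-envelope sketch; every price in it is a made-up placeholder, so check the current Azure pricing page before drawing conclusions:

```python
# Illustrative cost comparison for archiving 1 TB of backups.
# ALL rates below are placeholders, not real Azure prices.

data_gb = 1024

egress_per_gb = 0.08      # $/GB internet egress (placeholder)
cool_read_per_gb = 0.01   # $/GB read from the cool tier (placeholder)
ops_cost = 5.00           # rough flat guess for API call charges (placeholder)

# Route 1: pull everything down on-prem, then re-upload to archive.
via_on_prem = data_gb * (cool_read_per_gb + egress_per_gb) + ops_cost

# Route 2: in-cloud proxy in the same region; reads + writes, no egress.
via_proxy = data_gb * cool_read_per_gb + ops_cost

print(f"via on-prem: ${via_on_prem:,.2f}")   # 97.16 with these placeholders
print(f"via proxy:   ${via_proxy:,.2f}")     # 15.24 with these placeholders
```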
“What's the risk here? To move data, the local Veeam server needs modify permissions on both the cool and archive repositories. Theoretically, a hacker with access to the backup server could stop it from copying to the archive repository, but still delete from the cool repository?”
As @dloseke already said, you're in for a bad time if your environment is compromised; configuration-change monitoring, such as via Veeam ONE, is useful (though arguably they can attack this too). But while attackers can impact new offloads to archive and so on, if you've got immutability end to end they can't tamper with or delete your data until those immutability periods expire.
One point to clarify on the suggestion from someone else that the storage account could be deleted from Azure: if you've got immutability configured, Microsoft won't let you delete your storage account or any locked storage. Immutability is something AWS/Microsoft are aware of and have actively integrated into their platforms. These vendors prohibit the tampering, modification, or deletion of any locked data, and also protect against the destruction of its supporting constructs, such as the container or storage account. You'd have to delete your entire Microsoft/AWS account, or battle very hard through their support teams, to try to overrule this. One notable exception: if you're using AWS object lock in governance mode, someone with the s3:BypassGovernanceRetention permission can circumvent immutability, so compliance mode is best for that. Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
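To make that governance vs compliance distinction concrete, here's a sketch with boto3 (bucket and key are placeholders): in governance mode a principal holding s3:BypassGovernanceRetention can override the lock by passing the bypass flag, while the equivalent attempt against a compliance-mode lock is always refused until the retain-until date passes.

```python
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")

# Governance mode: a principal with s3:BypassGovernanceRetention can
# override the lock by passing BypassGovernanceRetention=True.
s3.put_object_retention(
    Bucket="my-backup-bucket",     # placeholder
    Key="backups/monthly.vbk",     # placeholder
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
    BypassGovernanceRetention=True,  # the loophole mentioned above
)

# Compliance mode: once set, no principal (not even the account root) can
# shorten or remove it; the same bypass attempt fails with AccessDenied
# until RetainUntilDate has passed.
```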