We have an on-premises Veeam server that runs the following jobs:

1. Backup to local storage (weekly full, daily incrementals)
2. Copy job to Linux hardened repository (immutable)
3. Copy job to Azure Blob cool storage (immutable)

Now I would like to have the Azure monthly backups moved to the archive tier. I'm assuming I have to use a SOBR (scale-out backup repository) for this. The following questions arise:

  • Say I create a second immutable repository with longer immutability, e.g. 1 year. After moving the full backup, will it calculate ‘initial backup date + 1 year’ or ‘SOBR copy date + 1 year’?
  • Do I need a new container for this, or can I just create a folder within a container?
  • If a backup is still immutable, can it be moved to the archive tier?
  • Copying backups obviously generates a large amount of traffic: download the backup from one repository, upload it to another. Is there something I can do to limit the traffic and costs?
  • What’s the risk here? To move data, the local Veeam server needs modify permissions on both the cool and archive repositories. Theoretically, couldn’t a hacker with access to the backup server stop it from copying to the archive repository, but still delete from the cool repository?

Let me say it now, and I’ll say it a lot below: I’m not super familiar with Azure, as I don’t do much cloud computing and I use Wasabi for S3-compatible object storage. But I’ll take a crack at this, and people can correct me if I’m wrong…

 

  • My understanding is that if you wanted to move data to an archive tier and you’re already using a SOBR, you would just add your archive bucket/container to the SOBR as the archive tier.  Perhaps I misunderstand that though as I’m not using archive tiers.
  • I believe the immutability flag is based on when the file is written to the repository and has nothing to do with the retention policy (which is based on when the backup was taken).  This is why, if your retention policy is shorter than your immutability period, immutability will hold on to the file: it will be marked for deletion and should be deleted once the immutability period expires.
  • Personally, with Wasabi, I would use a different bucket.  If the same server is accessing the bucket (in the case of Wasabi/S3-compatible storage), then I think you’d be fine using a different folder.  I can’t speak for Azure containers, though; I was under the impression that the different tiers use different containers and that it isn’t set at the folder level.  Again, I’m not super knowledgeable about Azure storage here, so I could be wrong.
  • My guess is that if you had a proxy server in Azure that can access both containers, this could eliminate the data egress/ingress because everything stays within Azure.  That said, I’m no Azure expert, but that’s at least what I’d be researching, versus downloading to your on-premises server and uploading back again.
  • If you have a hacker in the Veeam server, you’ve already got problems.  My recommendation would be deploying Veeam ONE and using it to monitor your jobs/infrastructure and alert on changes made; this would include stopping the copies to the archive tier.  I haven’t spent a ton of time with ONE, but I know there is some alerting for these sorts of things.  Also note that with proper immutability, even the Veeam server can’t delete data with the immutable flag set, so while an attacker could stop moving data to the archive tier, newer data would still be protected in the capacity tier.  But if you’re trying to protect your archive data, you’re going to need some monitoring in place to make sure the archiving continues to happen.

I’m not super familiar with using VBR to store backups in Azure, but I am more familiar with Azure for other uses.

With the Azure Blob object storage you’re referring to, each “blob” (like an “object” in S3 storage) has its own storage tier: “Hot”, “Cool”, or “Archive”. So assuming Veeam sets the tier per object, there should be no need for separate containers or even separate folders in Azure. When you create a “container” you set a default tier, but when Veeam uploads objects it can specify a different tier for them (I don’t know whether Veeam actually does this, I just know that Azure supports it). The “default” tier for Azure Blob Storage is simply the tier Azure will use if the uploading software doesn’t specify one during upload.
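For anyone who wants to see that per-blob behaviour concretely, here’s a minimal sketch using the Python azure-storage-blob SDK (v12). The connection string, container, and blob names are placeholders, and this only illustrates the Azure capability, not what Veeam does internally:

```python
# Minimal sketch of per-blob tiering with azure-storage-blob (v12).
# The connection string, container, and blob names are placeholders.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="backups", blob="monthly-full.vbk")

# The uploader may override the container's default tier per blob:
with open("monthly-full.vbk", "rb") as data:
    blob.upload_blob(data, standard_blob_tier=StandardBlobTier.COOL, overwrite=True)

# An existing blob can later be re-tiered in place (Archive included):
blob.set_standard_blob_tier(StandardBlobTier.ARCHIVE)
```

One caveat worth knowing: once a blob is in the Archive tier it can’t be read until it’s rehydrated back to Hot or Cool, which is the trade-off for the cheap storage.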

So assuming Veeam actually takes advantage of the Azure capabilities, it shouldn’t be complicated to set up on the Azure side.

Unfortunately I’m not fully aware of how Veeam handles immutability with object storage platforms, but my assumption is that if it’s set up properly, then once data leaves Veeam and goes into Azure it will be secured from anyone without separate access to the Azure account. Blob Storage does support blob versioning and deletion protection, but I believe that someone with access to the Azure account could still do things like delete the entire container, which would delete everything inside it even if the contents have historical versions and/or protection from early deletion.

So from that perspective it’s basically the same as any other storage platform. Once the data is out of Veeam, it’s “immutable”, but not to someone with access to the place where the data is stored.


@BackupBytesTim Thanks for explaining the different storage tiers within the same container.  I wasn’t aware that it worked that way, but it makes sense to me at least.  More and more I learn that I need to learn a lot more about Azure and how it works.  :-)



Correct, you must use a SOBR, as you can’t back up directly to the archive tier. If the performance tier within your SOBR is object storage, you can skip the capacity tier and offload from performance tier straight to archive tier. If your performance tier is block storage, then you must have hot/cool object storage in the capacity tier before you’re allowed to use an archive tier.

 

To use an archive tier, you’ve got to match the storage type used by the rest of the SOBR’s object storage: an Azure performance/capacity tier means an Azure archive tier, and an AWS performance/capacity tier means an AWS archive tier.

 

Onto your questions:

“Say I create a second immutable repository with longer immutability, e.g. 1 year. After moving the full backup, will it calculate ‘initial backup date + 1 year’ or ‘SOBR copy date + 1 year’?”

If this second repository is your immutable repository, the immutability period is set to the retention of your backups at the moment they are offloaded to the archive tier. Source: https://helpcenter.veeam.com/docs/backup/vsphere/immutability_archive_tier.html?ver=120

Quote: “The immutability period of a backup file will be equal to its retention period at the moment of archiving. If the retention period is not specified for VeeamZIP backup files or exported backup files, such files will not be made immutable.”
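
So to answer the question directly: on my reading of that sentence, it’s neither of the two options as asked. The immutability period equals whatever retention is left at the moment of archiving, so the immutable-until date lands on the backup’s retention expiry. A toy calculation with invented dates:

```python
# Toy illustration of the rule quoted above, on the reading that
# "retention period at the moment of archiving" means retention remaining.
# All dates are invented.
from datetime import date, timedelta

backup_created = date(2024, 1, 1)
retention_expiry = backup_created + timedelta(days=365)  # 1-year retention

archived_on = date(2024, 3, 1)                # when the offload job runs
immutability_period = retention_expiry - archived_on
immutable_until = archived_on + immutability_period

print(immutable_until)  # 2024-12-31, i.e. the retention expiry date
```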


“Do I need a new container for this, or can I just create a folder within a container?”

An additional container is not mandatory, nor is creating a new storage account. But neither costs anything, and they’re great both for segmenting the capacity vs archive tiers and for avoiding capacity limitations on object storage services. Source: https://forums.veeam.com/post403136.html#p403136

Further reading on Azure Limitations (your main one is default capacity per storage account): https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#standard-storage-account-limits
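
If you do go the dedicated-container route, the Azure side is a one-liner. A sketch with the Python SDK, using placeholder names:

```python
# Creating a dedicated container for the archive tier.
# The connection string and container name are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")

# Containers are free; only stored capacity counts against the
# storage account limits linked above.
service.create_container("veeam-archive-tier")
```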

 

“If a backup is still immutable, can it be moved to the archive tier?”

If its immutability hasn’t expired, the backup will be copied to the archive tier, since Veeam can’t delete the original copy; a cleanup task will then delete the original in the hot/cool object storage tier once the immutability expires. Source: https://helpcenter.veeam.com/docs/backup/vsphere/archiving_job.html?ver=120

 

“Copying backups obviously generates a large amount of traffic: download the backup from one repository, upload it to another. Is there something I can do to limit the traffic and costs?”

Archiving happens within the cloud, and this is one of the reasons why proxy appliances within the cloud are mandatory. Rather than pulling data down from the cloud (API calls + egress bandwidth + strain on your environment), you run a proxy within the cloud, preferably in the same region as both the source and destination object storage tiers. The proxy appliance reads the blobs from the hot/cool tier, repackages them into larger archive blocks (fewer API calls when/if you need to fetch them back), and writes them to the archive tier. This way you’re charged for reads from hot/cool storage and writes to archive storage, but there’s no egress bandwidth and no strain on your environment.
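
The proxy appliance is Veeam’s own machinery, and it repackages blobs rather than copying them one-to-one, but the cost principle it relies on, that a copy staying inside Azure incurs no egress, is the same one the SDK exposes as a server-side copy. A hedged sketch purely to illustrate that principle (names are placeholders; source and destination in the same storage account assumed):

```python
# Illustration of the "no egress" principle only: with a server-side copy
# the storage service moves the bytes itself, so the client issuing the
# call never downloads the data. Veeam's proxy actually reads and
# repackages blobs into larger archive blocks rather than copying 1:1.
# Names are placeholders; source and destination share a storage account.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
src = service.get_blob_client(container="veeam-cool", blob="block-0001")
dst = service.get_blob_client(container="veeam-archive-tier", blob="block-0001")

dst.start_copy_from_url(src.url)  # server-side; no egress bandwidth billed
```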


“What’s the risk here? To move data, the local Veeam server needs modify permissions on both the cool and archive repositories. Theoretically, couldn’t a hacker with access to the backup server stop it from copying to the archive repository, but still delete from the cool repository?”

As @dloseke already said, you’re in for a bad time if your environment is compromised. Configuration change monitoring, such as via Veeam ONE, is useful (though arguably they can attack that too). But whilst attackers can impact new offloads to archive and so on, if you’ve got immutability end to end they can’t tamper with or delete your data until those immutability periods expire.


One point to clarify on the suggestion from someone else that the storage account could be deleted from Azure: if you’ve got immutability configured, Microsoft won’t let you delete your storage account or any locked storage. Immutability is something AWS and Microsoft are aware of and have actively integrated, so these vendors prohibit tampering with, modifying, or deleting any locked data, and also protect against the destruction of its supporting constructs, such as the container or storage account. You’d have to delete your entire Microsoft/AWS account, or battle very hard through their support teams, to try to overrule this. One notable exception: if you’re using AWS in governance mode, someone with the BypassGovernanceRetention permission can circumvent immutability, so compliance mode is best for that. Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
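
For the AWS caveat at the end, the governance vs. compliance distinction shows up as a single parameter at write time. A boto3 sketch with invented bucket/key names, assuming the bucket was created with Object Lock enabled:

```python
# Sketch of the governance vs. compliance distinction with boto3.
# Bucket and key are invented; the bucket must have Object Lock enabled.
# GOVERNANCE locks can be bypassed by anyone holding the
# s3:BypassGovernanceRetention permission; COMPLIANCE locks cannot be
# shortened or removed by anyone, including the root account.
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="veeam-archive",
    Key="monthly-full.vbk",
    Body=b"...",
    ObjectLockMode="COMPLIANCE",  # not "GOVERNANCE"
    ObjectLockRetainUntilDate=datetime(2026, 1, 1, tzinfo=timezone.utc),
)
```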


Thanks @MicoolPaul for all of this.  I knew you’d have the answer to these questions and I’m happy that I had at least a few of them right or close to it.



Thanks guys for the extensive explanation. Just dipping my toes here, so excuse me if I come off ignorant. Would this mean I’d have to copy to the archive tier well before the performance-tier immutability ends? That would buy me some time if someone stops the archiving process.


Hi @Knuppel -

I’m just following up on your archive tier question. Did any of the provided comments answer your question sufficiently? If so, we ask that you select the comment that best helped you as ‘Best Answer’ so others who come across your post with a similar question may benefit.

If you do indeed have more questions, don’t hesitate to ask.

Thank you.

