One of the great features we’ve gotten with V12 is object lock (immutability) for Azure Blob storage.

@Rick Vanover did a great walkthrough at MS Ignite last year, already using a V12 beta.

With new storage accounts and containers it works as he explained, which I have already been able to verify in a few environments.

 

My question (and that of some of my customers) now is: can we enable it safely, and in a supported way, on an existing blob container with, let’s say, hundreds of TB of data already inside?

 

First, you would have to enable “version-level immutability” within the container:
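For reference, this is the container migration step on the CLI side. A sketch only, assuming the Azure CLI; the account, resource group, and container names are placeholders, not taken from any real environment:

```shell
# Sketch only -- account, resource group, and container names are placeholders.
# Migrate an existing container to version-level immutability (version-level WORM).
# Prerequisites, as described in the following paragraphs: an unlocked time-based
# retention policy on the container and blob versioning on the storage account.
az storage container-rm migrate-vlw \
    --resource-group my-rg \
    --storage-account mystorageacct \
    --name veeam-capacity-tier

# The migration runs asynchronously; its state can be checked afterwards:
az storage container-rm show \
    --resource-group my-rg \
    --storage-account mystorageacct \
    --name veeam-capacity-tier \
    --query immutableStorageWithVersioning.migrationState
```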

 

That is quite easy when creating a new container, as Rick has shown, but to do it afterwards you first have to enable a policy on the container:
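The container-level policy can also be added from the CLI. A minimal sketch, again with placeholder names; the 7-day period is just an example value:

```shell
# Sketch only -- account name, container name, and period are placeholders.
# Create an unlocked time-based retention (immutability) policy on the container.
az storage container immutability-policy create \
    --account-name mystorageacct \
    --container-name veeam-capacity-tier \
    --period 7
```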

 

Having this policy also makes the container differ from a freshly set-up immutable container. The latter would not have this policy.

To add this policy, you also have to enable versioning for the storage account beforehand:
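Enabling versioning on the account is a single call. A sketch, assuming the Azure CLI and placeholder names:

```shell
# Sketch only -- account and resource group names are placeholders.
# Enable blob versioning on the storage account (a prerequisite for the
# container immutability policy).
az storage account blob-service-properties update \
    --account-name mystorageacct \
    --resource-group my-rg \
    --enable-versioning true
```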

 

I tried that in a testing environment and ended up being able to activate immutability on the object storage repository that backs the SOBR capacity tier.

The problem was that no offloading to the capacity tier took place.

Error shown in Veeam’s storage management history log: “Object storage cleanup failed: Impossible to start a new log because there are other logs.”

 

Any hints on what I may be missing? Support for migrating to immutability would be more than welcome, as lots of environments already have tons of data in Azure containers. They would not want to ingest all of it again.

Thanks,

Michael

Not Azure-specific, but I always thought that immutability needed to be turned on when creating a bucket, not after it was created? Will be watching this thread to see the outcome.

Checking Wasabi, it seems I can turn on versioning for an already-created bucket and then immutability. Not sure if Azure works the same, but it looks like it based on your screenshots.

Keep us posted on the fix if there is one.


From the v12 User Guide: 

  • [Azure Blob Storage] Do NOT enable immutability for already existing containers in the Azure portal. Otherwise, Veeam Backup & Replication will not be able to process these containers properly and it may result in data loss.
There we go.  Thanks for that @randyweis 👍🏼


There is a saying in German: “Those who can read have a clear advantage.”

Turned out to be the case here… 😉

 

So, it’s not supported, which explains why I wasn’t successful.

 

Though in general I don’t see an obvious reason why it shouldn’t be possible:

Enabling the policy attaches the chosen lock timeout to all objects already in the container.

After that, newly ingested objects inherit the timeout set within Veeam. I know of several environments with PBs in their buckets; this could save a lot of money and time...

