One of the great features we’ve gotten with V12 is object lock (immutability) for Azure Blob storage.
With new storage accounts and containers it works as Rick explained, which I have already been able to verify in a few environments.
My (and some of my customers’) question now is: can we enable it safely, and in a supported way, on an existing blob container with, let’s say, hundreds of TB of data already inside?
First, you would have to enable “version-level immutability” on the container:
That is quite easy when creating a new container, as Rick has shown, but if you want to do this afterwards, you first have to enable an immutability policy on the existing container:
Having this policy also makes the container differ from a freshly set-up immutable container, which would not have it.
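For reference, adding such a policy to an existing container can also be done with the Azure CLI. A rough sketch of what I used (resource group, account and container names are placeholders from my lab, and the 7-day retention period is just an example):

    # Add a container-level time-based retention policy to the existing
    # container; this is the prerequisite for the later migration to
    # version-level immutability. All names/values are placeholders.
    az storage container immutability-policy create \
        --resource-group MyResourceGroup \
        --account-name mystorageaccount \
        --container-name veeam-capacity-tier \
        --period 7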
To add this policy, you also have to enable versioning for the storage account beforehand:
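Put together, the CLI sequence looked roughly like this for me (same placeholder names as above; note that versioning has to be in place before the container can be migrated, and the migration itself can take a while on containers holding many blobs):

    # 1. Enable blob versioning on the storage account.
    az storage account blob-service-properties update \
        --resource-group MyResourceGroup \
        --account-name mystorageaccount \
        --enable-versioning true

    # 2. Migrate the existing container to version-level immutability
    #    (requires the container-level policy created above).
    az storage container-rm migrate-vlw \
        --resource-group MyResourceGroup \
        --storage-account mystorageaccount \
        --name veeam-capacity-tier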
I tried all of that in a testing environment and ended up being able to activate immutability on the object storage repository that forms the SOBR capacity tier.
The problem, however, was that no offloading to the capacity tier took place.
Error shown in Veeam’s storage management history log: Object storage cleanup failed: Impossible to start a new log because there are other logs.
Any hints on what I might be missing? Support for migrating to immutability would be more than welcome, as lots of environments already have tons of data in Azure containers, and they would not want to ingest all of that again.
Thanks,
Michael