Object lock greater than retention... what happens?

  • 8 September 2022

Userlevel 7
Badge +9

A customer uses the capacity tier copy function to immediately copy on-premises Veeam backups (30-day retention) to object storage. They’ve configured object lock on the object storage with a 60-day duration, longer than the local backup policy. Since Veeam cannot delete a backup in object storage at 30 days, will it delete it after the object lock expires at 60 days, or does the customer need to manually delete the files in object storage?

My thoughts are that Veeam will clean up those files once the 60-day lock has expired, but it got me scratching my head for sure.


Best answer by haslund 8 September 2022, 14:52



Userlevel 7
Badge +21

I am thinking the same thing, that Veeam will clean up the files. But due to retention, will the files get cleaned from the configuration database first, so that Veeam no longer knows about them from that point on? If that is the case, will they remain? Makes you think. 🤔

Userlevel 7
Badge +14

They will get cleaned up by retention once immutability expires. Don’t forget about block generations, though, so an object may remain for up to 70 days.
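To make the 70-day figure concrete, here is a minimal sketch (the function and constant names are my own, purely illustrative) of how the effective lock window works out when the capacity tier extends immutability in block generations, which can add up to 10 days on top of the configured period:

```python
from datetime import date, timedelta

# Block generations can extend the configured immutability period
# by up to 10 extra days (illustrative assumption from this thread).
BLOCK_GENERATION_DAYS = 10

def worst_case_lock_expiry(upload_day: date, immutability_days: int) -> date:
    """Latest date the object lock could run to for an uploaded block."""
    return upload_day + timedelta(days=immutability_days + BLOCK_GENERATION_DAYS)

# The 60-day object lock from the scenario above:
uploaded = date(2022, 9, 8)
expiry = worst_case_lock_expiry(uploaded, 60)  # 60 + 10 = 70 days after upload
print(expiry)
```

So a 60-day lock plus a full block generation is where the "up to 70 days" in this thread comes from.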

Userlevel 7
Badge +17

If I remember correctly, the objects in object storage are flagged as deleted during the object lock retention time and are deleted as soon as the retention time is over.


Userlevel 7
Badge +6

Yes, my understanding is that the files get marked for deletion, and once the immutability flag expires, the files should be deleted.  That said, I haven’t tested this, but that’s how I understand the system to work.

Userlevel 7
Badge +14

I’ve just come across this and was also wondering what the outcome would look like. I tried to reproduce it in my lab: 1 restore point retention with 7 days of immutability. The interesting part is that, although I cannot manually delete the restore point as it’s immutable, after each job run only the most current restore point remains. So both the performance tier and the capacity tier show only 1 restore point. Rescanning the scale-out repository doesn’t change anything, except that it shows skipped backups.

I’m pretty sure the backups didn’t actually get deleted, because of the active object lock. More likely they’ve been flagged as deleted and therefore removed from the configuration database, as @JMeixner says. A post on the R&D forums confirms this.

So did anyone try this before?


By the way, with the hardened Linux repository it’s different: the backups remain accessible, but you get an informational event in the job logs.

Edit: I’ll now wait until the object lock expires and see whether the checkpoint cleanup processes the expired backups.

Userlevel 6
Badge +1

I’d still check the object storage from time to time. We had to delete 50+ TB last year that was never cleaned up, due to some bugs/locking issues.

Userlevel 1

I’d still check object storage from time to time. We had to delete 50+TB last year as it was never deleted due to some bugs/locking issues.

Hi Ralf, we are facing the same problem too. How did you delete them? We weren’t able to figure out how to identify those objects.