Solved

Veeam + Wasabi + Lifecycle Management Help


Hello,
A customer configured his Wasabi bucket with immutability set on the Wasabi side.
I know this is not the recommended configuration: normally I only enable versioning and Object Lock on the bucket, and the immutability is set on the Veeam repository.
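For reference, this is roughly what that recommended setup looks like through the S3 API: a minimal sketch with boto3 against Wasabi's S3-compatible endpoint. The endpoint, credentials and bucket name are placeholders, and no default retention or lifecycle rule is applied, so retention stays entirely under Veeam's control.

```python
# Minimal sketch (hypothetical names): create the bucket with Object Lock enabled,
# which turns versioning on automatically, and add nothing else.
# Retention / immutability is then configured only in the Veeam repository.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",      # adjust for your Wasabi region
    aws_access_key_id="WASABI_ACCESS_KEY",        # placeholder credentials
    aws_secret_access_key="WASABI_SECRET_KEY",
)

bucket = "veeam-sobr-capacity-tier"               # hypothetical bucket name

# Object Lock can only be enabled at bucket creation time.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Sanity checks: versioning is "Enabled", Object Lock is "Enabled",
# and there is no bucket-level default retention rule.
print(s3.get_bucket_versioning(Bucket=bucket).get("Status"))
print(s3.get_object_lock_configuration(Bucket=bucket)["ObjectLockConfiguration"])
```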

The problem is that the consumption keeps growing on the bucket. In Veeam the number of restore points is correct, but I have a lot of errors in the SOBR task. Not good!

A lifecycle policy has been set, but I have never used this kind of rule, so I am not sure of the result, especially with Veeam.


In Veeam, the repository also shows a warning about versioning being detected on the bucket.

The rule is in place, but… the deletion is not happening.
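If you want to see from the S3 side why nothing is expiring, a rough diagnostic sketch with boto3 is below (the endpoint and bucket name are placeholders, credentials come from the usual AWS/boto3 config). It dumps the lifecycle rule and the Object Lock retention of a few object versions, since a locked version cannot be permanently removed until its retain-until date has passed.

```python
# Diagnostic sketch (hypothetical bucket name): show the lifecycle rule and
# the Object Lock retention on a handful of object versions.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com")
bucket = "customer-veeam-bucket"   # hypothetical

# 1. What does the lifecycle rule actually say?
for rule in s3.get_bucket_lifecycle_configuration(Bucket=bucket)["Rules"]:
    print(rule["ID"], rule["Status"],
          rule.get("Expiration"), rule.get("NoncurrentVersionExpiration"))

# 2. Are the versions still locked? (get_object_retention raises an error
#    if a version has no retention set at all.)
for v in s3.list_object_versions(Bucket=bucket, MaxKeys=5).get("Versions", []):
    ret = s3.get_object_retention(Bucket=bucket, Key=v["Key"],
                                  VersionId=v["VersionId"])["Retention"]
    print(v["Key"], ret["Mode"], ret["RetainUntilDate"])
```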

Have you already handled this kind of configuration?

I have never had to use lifecycle management because, like you said, we never set retention within the object storage; we set it in Veeam instead so Veeam stays in control, which is the best practice.

Not sure if there is a way around this. Maybe start a new bucket without retention set, then have Veeam control it. Keep the old one and let it age out.



Yes, I'm coming to the same conclusion: start a new bucket from scratch. But I'm curious to know whether there is a way to handle this differently.



Not sure there is, with the immutability and the retention set within Wasabi. Maybe contact Wasabi support to see if there is something you are missing?


Yes, I sent a message to my contacts working at Wasabi, and I will also contact support, because I don't have just a few GB but many TB of data 😅


@Chris.Childerhose is giving good advice.
Veeam is, and should always be, the “point of control” for data management.
Using storage-specific features that are not coordinated by Veeam can be too labor-intensive and complex. That said, if there are features you would like to see integrated between Veeam and Wasabi, please let me know!



Thanks, Drew.  😎


Hi Philippe

I never use lifecycle rules on a bucket, BUT you should take into account the block generation period that Veeam uses to reduce I/O on cloud storage:

https://helpcenter.veeam.com/docs/backup/vsphere/performance_tier_block_generation.html?ver=120

If you have set immutability to 30 days on the job, Wasabi actually retains the objects for around 40 days, because Veeam adds the block generation period on top of the immutability period. So the lifecycle rule has nothing to delete.
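A rough way to verify this on a real object is to compare a version's LastModified timestamp with its Object Lock RetainUntilDate; with 30 days of immutability plus the block generation period you should see a gap of roughly 40 days. A sketch with boto3 (endpoint and bucket name are placeholders):

```python
# Compare when a version was written with how long it stays locked
# (hypothetical bucket name).
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com")
bucket = "customer-veeam-bucket"   # hypothetical

v = s3.list_object_versions(Bucket=bucket, MaxKeys=1)["Versions"][0]
ret = s3.get_object_retention(Bucket=bucket, Key=v["Key"],
                              VersionId=v["VersionId"])["Retention"]

written = v["LastModified"]
locked_until = ret["RetainUntilDate"]
print(f"written {written:%Y-%m-%d}, locked until {locked_until:%Y-%m-%d}, "
      f"~{(locked_until - written).days} days")
```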

You can find more info from Wasabi at this link, where there are Veeam-specific notes about lifecycle rules and delete markers:

https://docs.wasabi.com/docs/life-cycle-delete-markers?highlight=lifecycle


Hey @drews, I know; this is the first time I have seen this kind of configuration.
Veeam seems to be aware of it, as we get a warning about versioning being detected on the bucket, but in this case we also have immutability, so when you mix everything together… I can't guarantee the result.


Lifecycle management rules/policies should never be used on buckets that are managed/used by VBR:

https://helpcenter.veeam.com/docs/backup/vsphere/object_storage_repository_cal.html?zoom_highlight=lifecycle&ver=120

  • Data in an object storage bucket or container must be managed solely by Veeam Backup & Replication, including retention (in case you enable Object Lock and Versioning features on an S3 bucket or version-level WORM on an Azure container) and data management. Enabling lifecycle rules is not supported, and may result in backup and restore failures.
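If the customer ends up backing the rule out, removing the lifecycle configuration itself is a single S3 call; note that this only deletes the rule and cannot touch any data that is still locked. A sketch with boto3 (endpoint and bucket name are placeholders):

```python
# Remove the unsupported lifecycle configuration from the bucket.
# This only deletes the rule; it does not (and cannot) delete locked objects.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com")
s3.delete_bucket_lifecycle(Bucket="customer-veeam-bucket")   # hypothetical bucket name
```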

Yes!
For me the solution is to recreate a new bucket with the correct configuration and restart the copy from scratch.
But I was also looking for a way to delete the “old files” in this bucket; I'm afraid the lock keeps being applied, because I don't see any deletions.
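One thing worth checking on the old bucket is whether a default retention was configured at the bucket level on the Wasabi side: if it was, every new version written there gets locked automatically, independently of what Veeam requests. A sketch with boto3 (endpoint and bucket name are placeholders):

```python
# Check whether the bucket itself stamps a default Object Lock retention
# on every new version (hypothetical bucket name).
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com")
cfg = s3.get_object_lock_configuration(
    Bucket="customer-veeam-bucket")["ObjectLockConfiguration"]

print("Object Lock:", cfg.get("ObjectLockEnabled"))
print("Default retention:", cfg.get("Rule", {}).get("DefaultRetention", "none"))
```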



I would wait to see what the Wasabi and Veeam support organizations come back with for your options.  When using compliance mode your options are limited.  Waiting for the objects to become unlocked is probably your only option, but I hope they can assist you.



I was just coming to say the same thing about lifecycle rules. Think of a Veeam object storage repository as owned by the VBR server that writes to it: it owns the bucket, and it should be the only thing that ever interacts with the data.

If you need something to point to for the customer, the BP guide explicitly states this. TBH though, that was probably also written by @SteveF ;)

https://bp.veeam.com/vbr/2_Design_Structures/D_Veeam_Components/D_backup_repositories/object.html



Spot on!



I would recommend that approach (recreating the bucket), but as @SteveF suggested, let's wait for the support response!


I agree with you guys, and I take the same approach. In the end, after discussing it with the customer, a new bucket has been created with the correct settings this time. Veeam will drive the immutability.


Still just getting my feet wet with Wasabi, so I'm glad I came across this. I've done lots of testing and I have it configured correctly now, but I did come across this issue initially when setting things up and firing off as many jobs as I could to learn how it would work with immutability.


Hopefully others with this issue will see this and use VBR to manage it. 



As a friendly reminder, here is the KB article on Wasabi’s Academy for using Veeam with Wasabi object lock - https://docs.wasabi.com/docs/wasabi-veeam-object-lock-integration

