Solved

Backup Job configuration with S3 repository immutability


Stabz

I have a question about backup job configuration when using an S3 repository with immutability enabled.

In the job retention, if I set only 14 days without GFS, that means forever-incremental mode to me.
If I click Next, I don’t get any error or warning telling me to enable GFS, like I would with a hardened repository.
I remember that on object storage we only copy new blocks, not files. Once immutability expires, unused/expired blocks are simply deleted, but it’s not really clear to me.

More intriguing: Veeam is able to “merge” the files … and apply the retention …

Immutability does work if I try to delete the files.

 

Best answer by MicoolPaul (see below)

5 comments

MicoolPaul
  • October 10, 2024

Hi @Stabz 🙂 what’s the question exactly? Or is this more informational in general?

 

I’d suggest starting here to understand what’s happening from VBR 12.2 onwards; it has additional links for learning, and if you’ve got any questions, please fire away!


Stabz
  • Author
  • October 10, 2024

Oops! Forgot the question.

How can Veeam merge files and apply retention on objects that are supposed to be immutable?
@MicoolPaul where is the link? :D


MicoolPaul
  • Answer
  • October 10, 2024

“Merge” is an umbrella term that doesn’t quite explain everything going on here.

 

First, from a backup process perspective:

When we back up data, let’s say from a VM, the proxy fetches a block based on the block size (256 KB/512 KB/1 MB/4 MB/8 MB) configured within the job. Once we have this block, we compress and dedupe it, then save it as a blob within S3/object storage of your choice (I’ll just call it S3 from now on). Let’s focus on this single blob we’ve saved.
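To make the block-to-blob idea concrete, here’s a minimal Python sketch, not Veeam’s actual code: the bucket name, key layout, and the boto3/SHA-256/zlib choices are all my own assumptions. It splits a source stream into fixed-size blocks, skips blocks whose content is already stored (dedupe), and uploads each new block as its own compressed object.

```python
# Minimal sketch of block -> blob storage (NOT Veeam's implementation;
# bucket name, key layout, hash and compression choices are assumed).
import hashlib
import zlib

import boto3
from botocore.exceptions import ClientError

BLOCK_SIZE = 1024 * 1024     # 1 MB, one of the job-configurable sizes
BUCKET = "my-backup-bucket"  # hypothetical bucket
s3 = boto3.client("s3")

def backup_stream(stream):
    """Store each previously unseen block as its own compressed blob."""
    refs = []  # this restore point is just a list of blob references
    while True:
        block = stream.read(BLOCK_SIZE)
        if not block:
            break
        digest = hashlib.sha256(block).hexdigest()
        key = f"blocks/{digest}"
        try:
            s3.head_object(Bucket=BUCKET, Key=key)  # blob exists: dedupe, just reference it
        except ClientError:
            s3.put_object(Bucket=BUCKET, Key=key, Body=zlib.compress(block))
        refs.append(key)
    return refs
```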

 

With block storage, we kept referencing that block until we did another active full or a synthetic full. With a synthetic full, if we used block cloning we carried on referencing that block; otherwise we took a copy of it.

 

Within object storage we don’t have active/synthetic fulls (unless you specifically want active fulls on an archive tier of a SOBR). This means that until that block changes, every subsequent backup will reference this blob.
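To illustrate that forever-incremental referencing (hypothetical keys, continuing the sketch above): consecutive restore points are just lists of blob references, and unchanged blocks point at the very same blob.

```python
# Hypothetical restore points: unchanged blocks reference the same blob.
rp_monday  = ["blocks/aa11", "blocks/bb22", "blocks/cc33"]
rp_tuesday = ["blocks/aa11", "blocks/bb22", "blocks/dd44"]  # one block changed

# "blocks/cc33" is no longer needed by the latest backup; once its
# immutability expires, cleanup can delete it.
```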

 

From an immutability perspective, we check whether that blob is still the latest “version” of the data (was it needed by the latest backup, or did the block change?). If the block hasn’t changed, we periodically extend the immutability retention of that blob, because our latest backups require it and we require immutability for X number of days. If the block has been updated, we don’t extend the immutability duration; we let it expire as specified, and then we can perform lifecycle management on the blob (delete it when no backups require it anymore).
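On S3 this maps onto Object Lock retention. Here’s a hedged sketch of the extension idea, my own code rather than Veeam’s (the bucket name and day count are placeholders), using boto3’s put_object_retention; in COMPLIANCE mode the retain-until date can only move forward, never back.

```python
# Sketch of the retention-extension idea (my code, not Veeam's): push
# the Object Lock retain-until date forward for every blob the latest
# backup still references; superseded blobs keep their old date and
# become deletable once it passes.
from datetime import datetime, timedelta, timezone

import boto3

BUCKET = "my-backup-bucket"  # hypothetical
IMMUTABLE_DAYS = 14          # the "X number of days" from the job settings
s3 = boto3.client("s3")

def extend_retention(referenced_keys):
    retain_until = datetime.now(timezone.utc) + timedelta(days=IMMUTABLE_DAYS)
    for key in referenced_keys:
        # COMPLIANCE mode only allows extending retention, not shortening it.
        s3.put_object_retention(
            Bucket=BUCKET,
            Key=key,
            Retention={"Mode": "COMPLIANCE", "RetainUntilDate": retain_until},
        )
```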

 

And here is the link! https://www.veeam.com/blog/veeam-backup-replication-s3-immutability-block-generation.html


Stabz
  • Author
  • October 11, 2024

Thanks @MicoolPaul for the explanation.

But it’s still vague to me how Veeam can “merge” the blocks.
I assume that if the block doesn’t change, we add a reference to it and extend the immutability period.

But how can Veeam remove old restore points from the backup view? Veeam no longer shows the data, but the blocks are still present in S3 storage until the immutability expires?


MicoolPaul
  • October 11, 2024

Hey 🙂

 

As you can see, the “merge” takes 8 seconds; we’re not actually “merging” anything, due to the way object storage works. I’d suggest reading the logs for a more verbose view of what happens at this specific step. I’d expect this to be metadata operations (although I know we have another separate metadata step earlier in the chain).
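To show why that step can finish in seconds, here’s a small illustration of my own (not Veeam’s code): applying retention on object storage only rewrites the backup index, never the blobs themselves.

```python
# Sketch: applying retention as a pure metadata operation. Dropping the
# oldest restore point rewrites no blob data; blobs stay in S3 until
# their Object Lock expires AND no remaining restore point needs them.
def apply_retention(restore_points, keep):
    """restore_points: oldest-first list of blob-key lists; keep >= 1."""
    kept, retired = restore_points[-keep:], restore_points[:-keep]
    still_referenced = {key for rp in kept for key in rp}
    # Candidates for later cleanup, only once their lock has expired:
    deletable = {key for rp in retired for key in rp} - still_referenced
    return kept, deletable
```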

 

IMO, “merge” isn’t the most appropriate term here for object storage.

