
VBR replication to the cloud

  • April 7, 2022
  • 9 comments
  • 88 views


This is more of a processing issue but... I have one VBR system that processes some rather large file servers. We’re going to be setting up a Scale-Out Repo to Wasabi with an object-lock-enabled bucket.

When Veeam locally merges the latest incremental into a new full, does it push the complete newly created full out to the bucket, or is there some magic where Veeam only updates the current full that’s sitting out in the cloud? People with small internet pipes want to know. With a small domain server it’s no big deal to push 20 GB, but when it’s 25 TB…

I’m also going to assume that since Veeam is in control of the object locking and retention, I won’t have to figure out when things will be unlocked and when the magical merge is going to happen.

Thanks ahead of time.

Best answer by MicoolPaul

Hi,

 

There actually isn’t a “merge” at all: each block is stored individually on the object storage repository you’re using, and the metadata stitches the blocks together when required.

 

This provides multiple benefits:

  • Efficient recoveries: if 90% of the data persists on a local backup repo, only the missing 10% needs to be fetched from object storage, not everything.
  • Immutability support: the individual blobs don’t require any manipulation after upload.
  • Superior data efficiency and lower WAN bandwidth: when multiple restore points reference the same blob, the blob is re-used and data doesn’t get uploaded unnecessarily.

Hope this helps!
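To picture the blob re-use described above, here is a toy Python sketch (purely illustrative; the class and method names are hypothetical and this is not Veeam’s actual storage format). Each restore point is just metadata listing block hashes, and only blocks the store hasn’t seen before are uploaded:

```python
import hashlib

class ObjectStore:
    """Toy content-addressed store: blobs keyed by hash, restore points are metadata."""
    def __init__(self):
        self.blobs = {}           # hash -> block bytes
        self.restore_points = {}  # name -> ordered list of block hashes

    def upload_restore_point(self, name, blocks):
        """Upload a restore point; returns how many blocks actually crossed the WAN."""
        uploaded = 0
        hashes = []
        for block in blocks:
            h = hashlib.sha256(block).hexdigest()
            if h not in self.blobs:   # only blocks not already present are uploaded
                self.blobs[h] = block
                uploaded += 1
            hashes.append(h)
        self.restore_points[name] = hashes
        return uploaded

    def restore(self, name):
        """Metadata 'stitches' the shared blobs back into a full image."""
        return b"".join(self.blobs[h] for h in self.restore_points[name])

store = ObjectStore()
store.upload_restore_point("full-1", [b"A", b"B", b"C"])   # all 3 blocks uploaded
n = store.upload_restore_point("incr-1", [b"A", b"B", b"X"])  # only 1 block changed
assert n == 1                           # unchanged blocks were re-used, not re-sent
assert store.restore("incr-1") == b"ABX"
```

In this toy model, a “synthetic full” costs almost nothing to store in the cloud: it is only new metadata plus whichever blocks changed, which is the point MicoolPaul makes about WAN bandwidth.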

9 comments

Chris.Childerhose

Veeam will actually push out the inactive chain once the newly created full backup is completed. Please see these two Veeam help pages on Move vs. Copy modes and how the process works with chains:

Moving Backups to Capacity Tier - User Guide for VMware vSphere (veeam.com)

Copying Backups to Capacity Tier - User Guide for VMware vSphere (veeam.com)


JMeixner
  • On the path to Greatness
  • April 7, 2022

And if you use immutability, the objects already on the object storage cannot be modified by Veeam. The objects are locked for all actions except read until the retention time is over.
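That locking behaviour can be sketched in a few lines of Python (a toy model of object-lock semantics, not Wasabi’s or Veeam’s actual API; on S3-compatible storage this corresponds to Object Lock with a retain-until date):

```python
class ImmutableBlob:
    """Toy object-lock model: reads always allowed, delete blocked until retention expires."""
    def __init__(self, data, retain_until):
        self._data = data
        self.retain_until = retain_until  # e.g. a timestamp set at upload time

    def read(self):
        return self._data                 # reads are always permitted

    def delete(self, now):
        if now < self.retain_until:       # inside the retention window: refuse
            raise PermissionError("object is locked until retention expires")
        self._data = None                 # after expiry, deletion succeeds

blob = ImmutableBlob(b"backup-block", retain_until=100)
assert blob.read() == b"backup-block"
try:
    blob.delete(now=50)    # blocked: still inside retention
except PermissionError:
    pass
blob.delete(now=150)       # allowed once retention is over
```

As JMeixner says, Veeam sets and tracks the retain-until dates itself, so nothing needs to be unlocked manually.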




JMeixner
  • On the path to Greatness
  • April 7, 2022

Thank you for the detailed clarification. I was not completely aware of the benefit of blob re-use.


  • Author
  • Comes here often
  • April 7, 2022

Just to clarify (and I can leave immutability out of the question): when a system stored on the performance tier applies retention and merges the oldest incremental into a full, Veeam will push out the old data from the capacity tier and add just the changed blobs, which will represent the oldest restore point when the metadata is stitched together for restoration?

Thanks for all your help


MicoolPaul

Thank you for the detailed clarification. I was not completely aware of the benefit of blob re-use.

Thanks 🙂 hope it helped. There’s an option for the archive tier to not reuse any blobs, which makes sense as an additional data-integrity assurance for long retention periods such as 10+ years: you wouldn’t want some commonly shared blobs being deleted to ruin your entire archive history, so each restore point can keep its own “share nothing” blobs. It’s more expensive, of course, but additional data resiliency always has been…
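The trade-off between blob re-use and “share nothing” can be illustrated with toy numbers (an illustration only, not Veeam’s actual accounting):

```python
# Three restore points, each described as the set of block hashes it references.
points = [
    {"a", "b", "c"},
    {"a", "b", "d"},   # mostly unchanged from the first point
    {"a", "b", "e"},
]

# With blob re-use, every distinct block is stored exactly once.
reuse_cost = len(set().union(*points))

# With "share nothing", each restore point keeps its own copies of every block.
share_nothing_cost = sum(len(p) for p in points)

assert reuse_cost == 5          # a, b, c, d, e stored once each
assert share_nothing_cost == 9  # 3 blocks per restore point

# The resiliency angle: with re-use, losing shared block "a" would damage all
# three restore points; with share-nothing, each point is self-contained.
```

The numbers make MicoolPaul’s point concrete: share-nothing costs more storage, but no single blob is a shared point of failure across the whole archive history.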


JMeixner
  • On the path to Greatness
  • April 7, 2022

Thank you for the detailed clarification. I was not completely aware of the benefit of blob re-use.

Thanks 🙂 hope it helped. There’s an option for the archive tier to not reuse any blobs, which makes sense as an additional data-integrity assurance for long retention periods such as 10+ years: you wouldn’t want some commonly shared blobs being deleted to ruin your entire archive history, so each restore point can keep its own “share nothing” blobs. It’s more expensive, of course, but additional data resiliency always has been…

Yes, for long-term retention this makes sense. But for your “normal” backup chains with retention of weeks or several months, blob re-use is a great feature.


Chris.Childerhose

Just to clarify (and I can leave immutability out of the question): when a system stored on the performance tier applies retention and merges the oldest incremental into a full, Veeam will push out the old data from the capacity tier and add just the changed blobs, which will represent the oldest restore point when the metadata is stitched together for restoration?

Thanks for all your help

That should be correct.


MicoolPaul

Just to clarify (and I can leave immutability out of the question): when a system stored on the performance tier applies retention and merges the oldest incremental into a full, Veeam will push out the old data from the capacity tier and add just the changed blobs, which will represent the oldest restore point when the metadata is stitched together for restoration?

Thanks for all your help

I’m gonna sound like a typical consultant here: it depends.

 

You have two options: Move to cloud and Copy to cloud. Move offloads your inactive chains once the retention period for keeping backups local expires. Copy doesn’t have this limitation and will copy active chains as well. In both scenarios, only changed blobs are uploaded.

 

Copy can handle your incrementals as they are created; Move offloads an entire chain at once, as you can’t have a local incremental without the full it relies on!

 

I’d strongly recommend reading the SOBR section of the Veeam documentation to see whether Move or Copy better suits your needs, link here: https://helpcenter.veeam.com/docs/backup/vsphere/capacity_tier.html?ver=110
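The Move vs. Copy selection logic above can be sketched as follows (a simplified toy model with hypothetical names, not Veeam’s implementation; in the real product the local-retention window is the operational restore window configured on the SOBR):

```python
def chains_to_offload(chains, mode, restore_window_days):
    """Toy sketch of SOBR offload selection.

    chains: list of dicts like {"name": str, "active": bool, "age_days": int}
    mode:   "move" offloads only inactive chains older than the local window;
            "copy" mirrors everything, active chains included.
    """
    if mode == "copy":
        return chains  # copy mode mirrors all chains, active ones too
    # move mode: only sealed (inactive) chains past the local-retention window,
    # and always whole chains -- an incremental never moves without its full
    return [c for c in chains
            if not c["active"] and c["age_days"] >= restore_window_days]

chains = [
    {"name": "chain-1", "active": False, "age_days": 30},  # sealed, old enough
    {"name": "chain-2", "active": True,  "age_days": 3},   # still being written
]
assert len(chains_to_offload(chains, "copy", 14)) == 2     # copy mirrors both
assert [c["name"] for c in chains_to_offload(chains, "move", 14)] == ["chain-1"]
```

Either way, the upload itself still only transfers blobs the bucket doesn’t already hold, which is why even a 25 TB synthetic full doesn’t re-cross the WAN.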