Solved

Copy job to S3

  • 28 November 2023

Userlevel 2

Hi

I am configuring a backup copy job to S3 (immutable). The copy mode is periodic, so it only stores a weekly backup in the cloud, with one month of retention. My questions are: how does the backup chain work in S3? The same as locally? The first week will be a full and the second backup copy will be an incremental, but what happens when the first backup reaches its retention? Does Veeam convert the oldest incremental into a full?

 

Am I right?

Thanks 


Best answer by dloseke 28 November 2023, 22:01


3 comments

Userlevel 7

Backing up to S3-compatible object storage does not use traditional backup files and chains, and the concept of incrementals is somewhat foreign. Veeam keeps the blocks required to meet the retention period you set, and reassembles those blocks to reconstruct the data if you perform a restore from object storage. If a block is no longer needed to satisfy the retention policy, and it is not flagged as immutable, the block is deleted. If the block is flagged as immutable, it is marked for deletion but cannot be removed until the immutability flag expires. This is what makes object storage so much more efficient than regular block storage: you don't have to keep duplicate blocks of data across multiple full and incremental backup files.
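To make the immutability part concrete, here is a minimal sketch (not Veeam's internal logic; the bucket and key names are made up, and it assumes an AWS S3 bucket with Object Lock enabled) that uses boto3 to check when an object's retention lock expires. Until that date passes, S3 itself refuses to delete the object, which is why Veeam can only mark a still-immutable block for deletion:

```python
import boto3

# Placeholder names for illustration only; Object Lock must have been
# enabled on the bucket at creation time.
BUCKET = "veeam-offload-bucket"
KEY = "Veeam/Backups/some-data-block"

s3 = boto3.client("s3")

# Fetch the object's retention settings. While RetainUntilDate is in the
# future, S3 blocks deletion of this object (in COMPLIANCE mode, even for
# the root account), so a no-longer-needed block can only be removed later.
resp = s3.get_object_retention(Bucket=BUCKET, Key=KEY)
retention = resp["Retention"]
print(f"mode: {retention['Mode']}, locked until: {retention['RetainUntilDate']}")
```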

Userlevel 2


Very very good answer!!

From what I understand, if I copy a full and the next week's copy is an "incremental", the chain will still be consistent? I ask because the primary job runs every day, but the backup copy runs only one day a week.

Userlevel 7

If you have multiple backup runs locally between copies, whether subsequent incrementals or even subsequent fulls, then the next time the copy job to object storage runs, it's going to grab any and all data required from the local repo to build the data set specified in your retention policy. As I recall, the local restore points won't be deleted, even if local retention would otherwise mandate it, because the copy job holds them until their data has been copied. I'm a little rough on this concept, but I believe that's the case.
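To picture why the weekly copy stays consistent without periodic fulls, here is a toy model of the block-reuse idea (my own simplification for illustration, not Veeam's actual on-disk format): every restore point in the object repo is just a set of block references, the copy job uploads only blocks the repo doesn't already hold, and retention removes a block only once no in-retention restore point references it and its immutability window has passed.

```python
from datetime import datetime, timedelta

# Toy model: the object repo maps block hash -> immutable-until timestamp,
# and each restore point is just the set of block hashes it references.
object_repo = {}     # block_hash -> datetime the immutability flag expires
restore_points = []  # list of (created, set_of_block_hashes)

def copy_restore_point(created, blocks, immutability_days=30):
    """Upload only the blocks the repo doesn't already hold."""
    for h in blocks:
        if h not in object_repo:  # dedup: existing blocks are reused as-is
            object_repo[h] = created + timedelta(days=immutability_days)
    restore_points.append((created, set(blocks)))

def apply_retention(now, retention_days=31):
    """Drop expired restore points, then any block that is both
    unreferenced and past its immutability window."""
    cutoff = now - timedelta(days=retention_days)
    restore_points[:] = [(t, b) for t, b in restore_points if t >= cutoff]
    referenced = set().union(*(b for _, b in restore_points)) if restore_points else set()
    for h in list(object_repo):
        if h not in referenced and object_repo[h] <= now:
            del object_repo[h]

# Weekly copies mostly re-reference the same blocks, so the "chain" never
# needs an active full; old blocks are simply reused by newer points.
start = datetime(2023, 11, 1)
copy_restore_point(start,                 {"a", "b", "c"})  # first copy ("full")
copy_restore_point(start + timedelta(7),  {"a", "b", "d"})  # week 2 ("incremental")
copy_restore_point(start + timedelta(14), {"a", "d", "e"})  # week 3
apply_retention(start + timedelta(45))
print(sorted(object_repo))  # ['a', 'd', 'e']: only referenced blocks survive
```

In this model the week-2 copy is "incremental" only in the sense that it uploaded one new block; it is still a complete, self-describing restore point, which is why nothing ever has to be converted to a full.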

The question I typically have, though, is why you're copying periodically and only once a week. In most cases I'd want the latest data available in my object repo as soon as possible. Unless you have a very high change rate, your incremental changes are probably going to be relatively small, if copy time and bandwidth constraints are the concern. Putting the copy off to the end of the week generally just kicks the can down the road, because you're adding up more data to be copied when the job can finally run. But still, I get that with bandwidth issues, the weekend may be the only time you can run the copy job.
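On the once-a-week point, a quick back-of-the-envelope comparison may help; every number here is an invented assumption (1 TB protected, 2% daily change rate, 100 Mbit/s uplink), so substitute your own:

```python
# All figures are placeholder assumptions; substitute your own environment.
protected_gb = 1024   # protected data set (~1 TB)
daily_change = 0.02   # 2% of blocks change per day
uplink_mbps = 100     # bandwidth available to the copy job

daily_gb = protected_gb * daily_change
# Upper bound: a weekly copy moves up to 7 days of changes at once
# (less in practice, since blocks rewritten several times count once).
weekly_gb = daily_gb * 7

def copy_hours(gb, mbps):
    """GB -> gigabits -> seconds at the given rate -> hours."""
    return gb * 8 / (mbps / 1000) / 3600

print(f"daily copy:  {daily_gb:6.1f} GB  ~{copy_hours(daily_gb, uplink_mbps):.1f} h")
print(f"weekly copy: {weekly_gb:6.1f} GB  ~{copy_hours(weekly_gb, uplink_mbps):.1f} h")
```

Either way the total data moved per week is similar; the difference is whether you take it in one multi-hour weekend window or in short daily ones.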
