Backing up to S3-compatible object storage doesn't use traditional backup files and chains, and the concept of incrementals is somewhat foreign there. The repository keeps whatever blocks are required to satisfy the retention period you've set, and reassembles those blocks on demand if you perform a restore from object. Once a block is no longer needed to meet the retention policy, it's deleted, assuming it isn't flagged for immutability. If it is flagged, it's marked for deletion but can't actually be removed until the immutability flag expires. This is what makes object storage so much more efficient than regular block storage: you don't have to keep duplicate blocks of data across multiple full and incremental backup files.
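A rough sketch of that block bookkeeping in Python, just to make the idea concrete (the Block structure and field names are my own illustration, not how any particular product actually implements it):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Block:
    block_id: str
    referenced_by: set = field(default_factory=set)  # restore points that still need this block
    immutable_until: datetime | None = None          # object-lock style hold, if any
    marked_for_deletion: bool = False

def expire_restore_point(blocks: list[Block], restore_point: str, now: datetime) -> list[Block]:
    """Drop one restore point and return only the blocks that must be kept."""
    kept = []
    for block in blocks:
        block.referenced_by.discard(restore_point)
        if block.referenced_by:
            kept.append(block)               # still referenced by a newer restore point
        elif block.immutable_until and block.immutable_until > now:
            block.marked_for_deletion = True  # locked: flag it, delete once the lock lapses
            kept.append(block)
        # else: unreferenced and unlocked, so it actually gets deleted
    return kept
```

The point being that any given block exists exactly once, no matter how many restore points reference it.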
Very very good answer!!
From what I understand, if I copy a full and the next week it's an "incremental", the chain will still be consistent? Because the primary job runs every day but the backup copy only runs one day a week.
If you have multiple backup jobs that run locally, meaning subsequent incremental backups or even subsequent fulls, then the next time the copy job to object runs, it's going to grab any and all data required from the local repo to bring the object repo up to the data set specified in your retention policy. As I recall, the local restore points won't be deleted, even if the retention policy would mandate it, because the copy job holds them until that data has been copied. I'm a little rough around this concept, but I believe that's the case.
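If that's right, the pruning rule on the local repo would look something like this (purely illustrative names, not a real vendor API, and hedged by the same caveat as above):

```python
from dataclasses import dataclass

@dataclass
class RestorePoint:
    name: str
    past_retention: bool = False     # local retention policy says it can go
    copied_to_object: bool = False   # the copy job has already shipped it

def prune_local_repo(points: list[RestorePoint]) -> list[RestorePoint]:
    """A point past retention is only pruned once the copy job has it."""
    return [p for p in points if not (p.past_retention and p.copied_to_object)]
```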
The question I typically have, though, is why you're copying periodically and only once a week. In most cases I'd want the latest data available in my object repo as soon as possible. Unless you have a very high change rate, your incremental changes are probably going to be relatively small if copy time and bandwidth constraints are the concern. Putting the copy off to the end of the week generally feels like kicking the can down the road: you're just piling up more data to be copied when the job finally runs. But I do get that with bandwidth limits, the weekend may be the only time you can run the copy job, etc.