Solved

Backup Chains


Userlevel 5
Badge +1

Backup Job Chains…

 

If I keep seven (7) restore points and run an Active Full on Saturday (point #7) what happens to the six (6) files in the prior backup chain? Are they useless but don’t get deleted because we’re keeping seven (7) restore points?

 

 

Backup Copy Jobs are different, if I'm not mistaken. If you keep seven (7) restore points and the seventh is a periodic full, are the six prior restore points deleted all at once?

Best answer by jaceg23 28 March 2024, 20:03

21 comments

Userlevel 7
Badge +21

Check this link and note the text - Active Full Backup - User Guide for VMware vSphere (veeam.com)

The active full backup resets a backup chain. All incremental backup files use the latest active full backup file as a new starting point. A previously used full backup file remains on disk until it is automatically deleted according to the retention policy.
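That retention behavior can be sketched as a toy model (an illustration only, not Veeam's actual code): an old chain is deleted whole, and only once the newer chain alone satisfies the configured number of restore points, because deleting any single file from the old chain would break the incrementals that depend on it.

```python
def retire_old_chains(chains, retention_points):
    """chains: list of backup chains, each a list of restore-point labels,
    oldest chain first.  Older chains are deleted whole, oldest first, as
    long as the remaining newer chains still hold at least
    `retention_points` restore points."""
    chains = [list(c) for c in chains]
    while len(chains) > 1:
        points_without_oldest = sum(len(c) for c in chains[1:])
        if points_without_oldest >= retention_points:
            chains.pop(0)      # the whole old chain is removed at once
        else:
            break              # old chain is still needed to meet retention
    return chains

# 7-point retention: old chain is a full + 6 incrementals; Saturday's
# active full (F2) has just started a new chain.
old_chain = ["F1", "I1", "I2", "I3", "I4", "I5", "I6"]
print(retire_old_chains([old_chain, ["F2"]], 7))
# old chain is kept: the new chain alone holds only 1 restore point

# Once the new chain itself reaches 7 points, the old one goes all at once:
new_chain = ["F2", "I7", "I8", "I9", "I10", "I11", "I12"]
print(retire_old_chains([old_chain, new_chain], 7))
```

This matches the question in the original post: the six "useless" files stick around until the new chain can satisfy retention on its own, then the whole old chain is removed together.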

Userlevel 7
Badge +17

Hi @jaceg23 -

Imo, the best way to see how it all works is to watch the retention animations Veeam has in a KB. See below, and click the link there for each backup method (Fwd, FFwd, Rev). For your specific question, you would focus on the Fwd method. 

https://www.veeam.com/kb1799

Userlevel 7
Badge +17

Basically speaking, yes, Veeam has to keep all previous restore points in a chain to adhere to your configured retention settings. 

Userlevel 5
Badge +1

Check this link and note the text - Active Full Backup - User Guide for VMware vSphere (veeam.com)

The active full backup resets a backup chain. All incremental backup files use the latest active full backup file as a new starting point. A previously used full backup file remains on disk until it is automatically deleted according to the retention policy.

And this holds true for any incrementals as well? They are deleted per the retention policy?

Userlevel 7
Badge +21

Check this link and note the text - Active Full Backup - User Guide for VMware vSphere (veeam.com)

The active full backup resets a backup chain. All incremental backup files use the latest active full backup file as a new starting point. A previously used full backup file remains on disk until it is automatically deleted according to the retention policy.

And this holds true for any incrementals as well? They are deleted per the retention policy?

That is correct.

Userlevel 7
Badge +17

Only when you reach your configured retention settings...yes. You have to retain a full chain (e.g., 7 points), or you wouldn't be able to perform a restore. So, for the Fwd method, you need more Repo storage because you can have up to 2 full chains on disk before the previous one is deleted. 
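As a back-of-envelope illustration of that storage overhead (the sizes below are assumptions, not measurements): with weekly active fulls and 7-point retention, the repo briefly holds two complete chains before the old one is retired.

```python
# Rough peak storage for the Fwd method with weekly active fulls and
# 7-point retention.  Sizes are illustrative assumptions only.
FULL_GB = 500   # assumed size of one full backup file (.vbk)
INCR_GB = 50    # assumed size of one incremental (.vib)

one_chain_gb = FULL_GB + 6 * INCR_GB   # 1 full + 6 incrementals

# Just before the old chain can be retired, the repo holds the entire
# previous chain plus a new chain that has grown to the full 7 points.
peak_gb = 2 * one_chain_gb

print(f"steady state: {one_chain_gb} GB, peak: {peak_gb} GB")
# steady state: 800 GB, peak: 1600 GB
```

So when sizing a repo for the Fwd method, budget for roughly double the steady-state chain size, not just one chain.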

Userlevel 7
Badge +17

Another thing to keep in mind @jaceg23 ...if you use ReFS on Windows or XFS on Linux for your Repo, it is recommended to do Synthetic rather than Active fulls, as the time they take to perform is negligible. Just FYI. 

Userlevel 5
Badge +1

What is the best way to tackle this then as far as restore points and periodic fulls go? I understand that periodic fulls will make recovery faster due to fewer files in the chain to "have to go through". However, we're also looking at doing per-VM backups at the repo level. In this case of doing a periodic full, would it make more sense to steer away from per-VM backups? Or do forever incremental, and periodic compaction on the full files?

Userlevel 7
Badge +17

I used to use the latter @jaceg23 (FFwd and periodic compact, etc), but since I now use Immutable storage, the only option there is Fwd. And, I configure synthetic fulls. I actually recommend that route, if you can. 

Userlevel 5
Badge +1

What about using “per-vm” repo vs normal repo? Are synthetic operations faster than traditional operations since Veeam uses existing restore points to create the synthetic full?

Userlevel 7
Badge +17

I'd configure per-VM for 2 reasons...1. to get the most stream usage out of your SAN, and 2. because new copy jobs (starting with v12) use per-VM by default. 

Userlevel 7
Badge +21

What about using “per-vm” repo vs normal repo? Are synthetic operations faster than traditional operations since Veeam uses existing restore points to create the synthetic full?

This will depend on the underlying filesystem, that being XFS/ReFS.  If you have these, then Synthetic fulls work well, but you can also set GFS if needed to do a full once a week or month, maybe.

Userlevel 7
Badge +21

What about using “per-vm” repo vs normal repo? Are synthetic operations faster than traditional operations since Veeam uses existing restore points to create the synthetic full?

Use Per-VM as it makes a chain per VM so that if something breaks it does not affect the other VMs that you are backing up.  Best method if you ask me.

Userlevel 7
Badge +17

What about using “per-vm” repo vs normal repo? Are synthetic operations faster than traditional operations since Veeam uses existing restore points to create the synthetic full?

Sorry @jaceg23 ...didn't see the 2nd part of your question re: Synth vs Active fulls - well, mostly I would say definitely yes, especially if you're using a block-clone filesystem like ReFS (Windows) or XFS (Linux). Creating a Synth Full is nearly instantaneous because of how this storage filesystem architecture works: 1. all the data is already on the Repo storage, so Veeam just uses data "local" to the Repo; 2. as a result, Veeam doesn't have to traverse the network as it would to create an Active Full; and 3. Veeam doesn't have to create snapshots on the source VM, which lessens the load on the prod storage and thus doesn't hinder production VM performance.
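The block-clone effect can be modeled in a few lines (a simplified sketch, not Veeam's or the filesystem's actual implementation): storage is treated as a pool of reference-counted blocks, and a synthetic full just adds references to blocks already in the repo instead of transferring new data over the network.

```python
class Repo:
    """Toy block store: block_id -> reference count."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_ids):
        """Active full: data arrives over the network as new blocks."""
        for b in block_ids:
            self.blocks[b] = self.blocks.get(b, 0) + 1
        return len(block_ids)      # blocks transferred over the network

    def clone(self, block_ids):
        """Synthetic full on ReFS/XFS: block-clone data already on disk."""
        for b in block_ids:
            self.blocks[b] += 1    # just bump refcounts; no data moves
        return 0                   # nothing crosses the network

repo = Repo()
chain_blocks = ["b1", "b2", "b3", "b4"]
print("active full, blocks transferred:", repo.write(chain_blocks))     # 4
print("synthetic full, blocks transferred:", repo.clone(chain_blocks))  # 0
```

This is why the synthetic full is "almost free" on ReFS/XFS: the cost is metadata updates, not data movement.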

Userlevel 5
Badge +1

All good info guys!! Thank you!  I think what we want to do is GFS to immutable storage. So keep X amount of daily backups, Y amount of weekly, one monthly, and one yearly. I assume we're going to need bigger storage for this, because the device we have is only going to give us ~30TB of storage using an iSCSI LUN as immutable storage. I've got to read through this more and try to understand GFS. But doesn't GFS defeat the point of immutability? Since parts of the backup chain will only be immutable for however long you set the immutability period, the daily/weekly/monthly/yearly points can expire in a set amount of time.
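To make the GFS selection concrete, here is a simplified sketch (not Veeam's actual scheduler; the retention counts and the "weekly = Sunday, monthly = 1st, yearly = Jan 1" rules are illustrative assumptions) showing which points such a policy would keep out of 120 days of daily backups:

```python
from datetime import date, timedelta

# Illustrative GFS settings: 7 daily, 4 weekly, 1 monthly, 1 yearly.
DAILY, WEEKLY, MONTHLY, YEARLY = 7, 4, 1, 1

def gfs_keep(points):
    """points: sorted list of daily restore-point dates."""
    keep = set(points[-DAILY:])                      # most recent dailies
    sundays = [d for d in points if d.weekday() == 6]
    keep.update(sundays[-WEEKLY:])                   # weekly points (Sundays)
    firsts = [d for d in points if d.day == 1]
    keep.update(firsts[-MONTHLY:])                   # monthly (1st of month)
    jan1s = [d for d in points if d.month == 1 and d.day == 1]
    keep.update(jan1s[-YEARLY:])                     # yearly (Jan 1)
    return sorted(keep)

points = [date(2024, 1, 1) + timedelta(days=i) for i in range(120)]
kept = gfs_keep(points)
print(f"{len(points)} daily points -> {len(kept)} kept")
```

The point on immutability: each kept restore point is only locked for the configured immutability period, so long-term GFS points can outlive their immutable window yet still be retained by the retention policy.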

Userlevel 7
Badge +21

You may require more storage, yes. The immutability period will be dictated by the retention and GFS settings.

Userlevel 5
Badge +1

This discussion was kind of all over the place :/ my apologies. If I understand correctly, GFS is only for backup copy jobs, not regular backup jobs.  In this case, wherever we store copy jobs would need to have sufficient space, more so than the repo for regular backup jobs.

Userlevel 7
Badge +21

This discussion was kind of all over the place :/ my apologies. If I understand correctly, GFS is only for backup copy jobs, not regular backup jobs.  In this case, where we store copy jobs would need to have sufficient space, more so than the repo for regular backup jobs.

If that is how you are configuring it yes.  You can do GFS on regular backup jobs as well FYI. 😉

Userlevel 5
Badge +1

Everyone’s answer is a good answer. Thanks all!

Userlevel 5
Badge +1

What about using “per-vm” repo vs normal repo? Are synthetic operations faster than traditional operations since Veeam uses existing restore points to create the synthetic full?

Use Per-VM as it makes a chain per VM so that if something breaks it does not affect the other VMs that you are backing up.  Best method if you ask me.

Sorry I missed this. So if you have say, three VMs clubbed together in a job, and do per-vm, if one breaks in said job, how do you fix it or do you just start a “new” job for that particular VM and remove it from the existing?

Userlevel 7
Badge +21

What about using “per-vm” repo vs normal repo? Are synthetic operations faster than traditional operations since Veeam uses existing restore points to create the synthetic full?

Use Per-VM as it makes a chain per VM so that if something breaks it does not affect the other VMs that you are backing up.  Best method if you ask me.

Sorry I missed this. So if you have say, three VMs clubbed together in a job, and do per-vm, if one breaks in said job, how do you fix it or do you just start a “new” job for that particular VM and remove it from the existing?

You can run a full backup of just that one VM to start a new chain in the job itself.  That is the good thing with the new format as each VM has its own chain of files.
