
Hey everyone! 

Happy Friday 😃

I had a quick question regarding per-VM backup chains. If I remember rightly, I read somewhere that they would be enabled by default in V12.

Are per-VM backup chains now best practice? Anything to be aware of when transitioning already existing backup jobs that don't currently use per-VM backup chains?

Thank you!

For backup copy jobs it's now the only option, I believe, with the exception of Veeam Agent for Windows Failover Cluster backups, if I remember correctly.

 

The main downside to per-VM backup files is that they'll consume more space than a per-job backup file, because data efficiency techniques such as deduplication are no longer applied across VMs (think how many identical bytes of OS data you've probably got in a backup job).
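To put a very rough number on that, here's a quick back-of-the-envelope sketch in PowerShell. All of the figures are made-up assumptions and it ignores compression entirely, but it shows why cross-VM dedup only really matters when the shared OS data is a large fraction of each VM:

```powershell
# All figures below are invented for illustration - adjust for your environment.
$vmCount      = 20   # similar Windows VMs in one job
$osDataGB     = 15   # largely identical OS/system data per VM
$uniqueDataGB = 60   # data unique to each VM

# Per-job chain: identical OS blocks can dedupe across VMs, stored roughly once
$perJobGB = $osDataGB + ($uniqueDataGB * $vmCount)

# Per-VM chain: every VM's chain keeps its own copy of the OS blocks
$perVmGB  = ($osDataGB + $uniqueDataGB) * $vmCount

"Per-job : ~$perJobGB GB"
"Per-VM  : ~$perVmGB GB (roughly $($perVmGB - $perJobGB) GB more)"
```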

 

But on the plus side, it's awesome from a performance standpoint.

 

If you're upgrading from v11, be sure to upgrade your metadata format, as versions before v12 used a per-backup-job metadata file instead of a per-VM metadata file. It's a manual process, and you can't run the backup job whilst it's migrating the metadata.
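If you want to make sure the job can't kick off mid-migration, something along these lines should do it. A minimal sketch using the standard Veeam PowerShell cmdlets; the job name is made up, and the chain format upgrade itself is still run from the console as described in the docs linked further down:

```powershell
# "SQL-Prod-Backup" is a hypothetical job name - substitute your own.
$job = Get-VBRJob -Name "SQL-Prod-Backup"

# Stop the schedule from triggering the job during the migration
Disable-VBRJob -Job $job

# ... run the "Upgrade Backup Chain Format" action in the console here ...

# Re-enable once the metadata migration has completed
Enable-VBRJob -Job $job
```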


Per-VM chains are needed for moving single VMs from job to job.

And existing chains will not be converted automatically after the upgrade to V12.


After changing to per-VM chains, an active full backup is automatically created on the next job run.
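If you'd rather have that active full land in a maintenance window instead of the next scheduled run, you can kick the job off yourself. A small sketch with a made-up job name; Start-VBRJob's -FullBackup switch forces an active full explicitly:

```powershell
# "SQL-Prod-Backup" is a hypothetical job name - substitute your own.
$job = Get-VBRJob -Name "SQL-Prod-Backup"

# Run the job now and force an active full instead of an incremental
Start-VBRJob -Job $job -FullBackup
```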


Thank you :) 


If you use a dedup storage appliance or XFS/ReFS fast clone, duplicated data shouldn’t come into play. Regardless...from a performance perspective, it’s fantastic! :) 

Cheers!
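For what it's worth, the fast-clone benefit only applies if the repository volume is actually ReFS (or XFS with reflink on Linux). A minimal sketch for a Windows repository volume, assuming drive R: and the commonly recommended 64 KB allocation unit size:

```powershell
# Hypothetical drive letter and label; 64 KB (65536) is the commonly
# recommended allocation unit size for ReFS backup repositories.
Format-Volume -DriveLetter R -FileSystem ReFS -AllocationUnitSize 65536 `
    -NewFileSystemLabel "VeeamRepo"
```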


Just to clarify, the new feature is called "true per-machine", and it creates a separate metadata file (.VBM) for each workload (unlike the old format, which created a single .VBM for the whole job).

More info here:

https://helpcenter.veeam.com/docs/backup/vsphere/per_vm_backup_files.html?ver=120

And remember to upgrade the backup chain in order to apply the new format!

https://helpcenter.veeam.com/docs/backup/vsphere/backup_change_type.html?ver=120


Great thanks. Appreciate the replies. 

Have a great weekend all!


If you back up a LOT of similar servers, you won't get the same dedupe/compression, but I found in production that the loss of data reduction was minimal. The performance gain is significant.

 

It also adds portability for your VMs. After sending my jobs to tape, I can restore a single VM rather than, say, the 20 that are in the same job, which takes significantly less time.

 

I have been on per-VM jobs for quite a while now, and switching is a no-brainer.


"...think how many identical bytes of OS data you've probably got in a backup job."

 

I generally think of this in OS terms. There are some savings when deduplicating the OS files, but overall I think the flexibility of per-VM more than makes up for that inefficiency. That said, if you had a cluster of application servers or something similar with a large amount of duplicate data, then maybe it would be significant. Personally, I like per-VM because it's much simpler to isolate what is what on the filesystem, and when it comes time to delete a backup, you actually reclaim the space, versus a per-job backup just creating whitespace inside the backup files.
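That "isolate what is what on the filesystem" point is easy to see for yourself. A trivial sketch (the repository path is made up): with per-VM files, a plain directory listing already tells you which workload is using the space.

```powershell
# Hypothetical repository folder - substitute your own path.
# With per-VM chains each workload gets its own .vbk/.vib files.
Get-ChildItem "D:\Backups\Prod-Job" -Recurse -Include *.vbk, *.vib |
    Sort-Object Length -Descending |
    Select-Object Name, @{ n = 'SizeGB'; e = { [math]::Round($_.Length / 1GB, 1) } }
```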


Honestly, the dedupe savings from similar OSes are not as good as you'd think… It looks great, but per-VM jobs come close. I think a lot of that dedupe/compression "savings" comes from the unused space on the disks, and that holds true for per-VM jobs too.

 

I tried grouping OSes together and applications together, looking for the best ratios, and in the end it turned out that, unless you have HUGE jobs with a ton of VMs, it most likely won't be much different.

 

Every environment will be different, and there was some slight loss going to per-VM, but it was quite small. It would be interesting to see some comparable before-and-after numbers with the same VMs in the job. I was tracking this for a bit and stopped caring because it was so minor :) 

 

 


I have an issue while performing the “Upgrade Backup Chain Format”.
It fails with the following error:

Example job "D-HILEXCH":

8/03/2023 8:16:45 Failed Failed to upgrade backup D - HILEXCH Error: More than one password have been found for backup D - HILEXCH

Any ideas/feedback are welcome.
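Not a fix, but it might help to confirm how many encryption keys your backup server actually knows about, since the error implies the backup is associated with more than one. A quick sketch with the standard cmdlets (the backup name is taken from the error above):

```powershell
# List all encryption keys known to this backup server
Get-VBREncryptionKey | Format-List

# And confirm you're looking at the backup from the error message
Get-VBRBackup -Name "D-HILEXCH" | Format-List
```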


Did you change the encryption password recently?


Hello Joe,
no, we didn’t change the encryption password during the past 1024 days.

 


Do you get this error with this job only? Or with all jobs?


I get this error with all jobs.


Then I would suggest opening a support case with Veeam...


That's what I did yesterday, Joe, but no response so far.
I’ll post an answer whenever I get any feedback from Veeam Support.
Thanks anyway.


Support is probably under high workload at the moment because of the patches for VBR 11 and 12.


Today, after upgrading to v12, we started experiencing the same error ("Error: More than one password have been found for backup...") for just one job, which is a Windows Agent Backup job. Other jobs using VMware backup are fine, and there are other Windows Agent Backup jobs that are fine as well. However, I remembered that I had disabled that particular job prior to the upgrade (as its schedule was going to trigger it during the upgrade). I guess some upgrade script didn't process that job as a consequence. After re-creating the job, all is fine now.
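Based on that, it's probably worth checking for disabled jobs before (and after) the upgrade so nothing gets skipped the same way. A small sketch; I believe IsScheduleEnabled is the relevant property on the job object, but verify it on your version:

```powershell
# List jobs whose schedule is currently disabled - candidates for being
# skipped by upgrade/housekeeping tasks. (IsScheduleEnabled and TypeToString
# are the properties community scripts typically use; confirm on your version.)
Get-VBRJob | Where-Object { -not $_.IsScheduleEnabled } |
    Select-Object Name, TypeToString
```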


Thanks to all for posting your helpful comments.  Much appreciated, as I’m performing the Backup Chain conversion myself.  I want to clarify something here, though, as I think it will help newcomers to the process like myself, going forward. 

In spite of the fact that Veeam themselves use the term, this is not an 'upgrade'. Veeam have not recently introduced the 'Per-Machine Backup Chain' format. This format has already been available for at least 5 years (that's how long we've been using VB&R). Also already available is the 'Single-File Backup Chain' format. Up to now, customers like ourselves have chosen which BC format to use based on the needs of the individual jobs, with some getting Per-Machine and some Single-File. Per-Machine jobs get better performance and Single-File jobs get better data reduction. What we have here is a deprecation of the Single-File option by Veeam, and customers subsequently having to convert these jobs to Per-Machine. It's the removal of choice. Driving this are recent changes made to the product and Veeam's desire to not support Single-File anymore.

I should clarify here that, while the Per-Machine BC format is not new, a small change has been made to it recently with the introduction of Per-Machine Metadata. This looks like an easy conversion, though, and usually should not require an Active Full run.

Once I realised this, I found it helped me a lot to understand the change and plan BC conversion.  Much as I’ve really appreciated the Veeam products and support since I started using them over the years, I’ve been very disappointed with their messaging around this.  I may have missed something, but not once have I read or heard the word ‘deprecation’ used by them instead of ‘upgrade’.  Adding to this, while documentation exists, it has gaps and should be more comprehensive.


"What we have here is a deprecation of the Single-File option by Veeam, and customers subsequently having to convert these jobs to Per-Machine. It's the removal of choice."

 

I don't recall the exact reason that per-job single-file backups were deprecated, but as best I can remember, some of it had to do with the added flexibility of each workload having its own backup chain. While there was a performance impact to using single-file, the space savings eventually became not worth the hit in flexibility. I believe there was a technical reason as well, in which the single-file limitation prevented something from working properly going forward. Either way, I never found an advantage to staying with single-file, even on more space-constrained repositories.

In fact, last week I started receiving low space alerts on a customer's repository, but deleting backups we no longer needed yielded no space savings on the drive. Of course, I found that they were using single-file on their older repository, so files were not actually being deleted from disk. Again, that lack of flexibility was what bit me there. In this case, I can't just delete the backup data to start a new per-VM backup chain, and there isn't enough space to start a new chain alongside the old one, so I'm somewhat stuck there for the time being. I get your note about the removal of choice, but that choice is actually making things harder for me. Had it not been an option, I would have been able to clean up the repository much more easily.

 

"I should clarify here that, while the Per-Machine BC format is not new, a small change has been made to it recently with the introduction of Per-Machine Metadata. This looks like an easy conversion, though, and usually should not require an Active Full run."

 

The conversion from a single metadata file (per-job) to a separate file for each workload (per-VM) is simple if you're already using the per-VM backup chain format. Same customer I noted above: I was able to upgrade a separate job/repo (a VCC copy job) from per-VM with a single metadata file to per-VM with separate metadata files. I don't think the conversion even took a minute per VM as advertised… I think it was about 4 minutes to upgrade the chain format for somewhere between 7 and 10 VMs within the job.