Hey everyone! 

Happy Friday 😃

I had a quick question regarding per-VM backup chains. If I remember rightly, I read somewhere that in v12 they would be enabled by default.

Are per-VM backup chains now best practice? Is there anything to be aware of when transitioning existing backup jobs that don't currently utilise per-VM backup chains?

Thank you!

For backup copy jobs it’s now the only option, I believe, with the possible exception of Veeam Agent for Windows Failover Cluster backups, if I remember correctly.

The main downside to per-VM backup files is that they consume more space than a per-job backup file, because data-efficiency techniques such as deduplication aren’t applied across VMs (think how many identical bytes of OS data you’ve probably got in a backup job).
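
A rough back-of-the-envelope sketch of that overhead, with invented numbers (real results depend on your data, compression, and repository):

```python
# Toy estimate of the per-VM space overhead described above.
# Every number here is an illustrative assumption, not a Veeam figure.

num_vms = 20
os_data_gb = 30        # assumed OS data identical across all VMs
unique_data_gb = 100   # assumed data unique to each VM

# Per-job chain: identical OS blocks can be deduplicated once
# across the whole job.
per_job_gb = os_data_gb + num_vms * unique_data_gb

# Per-VM chains: each backup file carries its own copy of the OS
# blocks, since dedup is no longer applied across VMs.
per_vm_gb = num_vms * (os_data_gb + unique_data_gb)

extra = per_vm_gb - per_job_gb
print(f"per-job: {per_job_gb} GB, per-VM: {per_vm_gb} GB, "
      f"extra: {extra} GB (+{extra / per_job_gb:.0%})")
```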

But on the plus side, it’s awesome from a performance standpoint.

If you’re upgrading from v11, be sure to upgrade your metadata format, as versions before v12 used a single per-job metadata file instead of a per-VM metadata file. It’s a manual process, and you can’t run the backup job whilst it’s migrating the metadata.


Per-VM chains are needed for moving single VMs from job to job.

And the chain format will not be updated automatically after the upgrade to v12.


After changing to per-VM chains, an active full backup is automatically created on the next job run.


Thank you :) 


If you use a dedup storage appliance or XFS/ReFS fast clone, duplicated data shouldn’t come into play. Regardless… from a performance perspective, it’s fantastic! :)

Cheers!


Just to clarify, the new feature is called “true per-machine”, and it creates a separate metadata file (.VBM) for each workload (unlike the old format, which created a single .VBM for the whole job).

More info here:

https://helpcenter.veeam.com/docs/backup/vsphere/per_vm_backup_files.html?ver=120

And remember to upgrade the backup chain in order to apply the new format!

https://helpcenter.veeam.com/docs/backup/vsphere/backup_change_type.html?ver=120
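
As a rough illustration of the metadata difference (this is not Veeam tooling; the folder-layout rule and the path are simplified assumptions based on the description above):

```python
# Guess a job folder's chain format from its metadata files, using
# the simplified rule above: legacy format = one .vbm for the whole
# job, true per-machine = one .vbm per workload. Real repository
# layouts may differ.
from pathlib import Path

def chain_format(job_folder: str) -> str:
    vbm_count = len(list(Path(job_folder).glob("*.vbm")))
    if vbm_count <= 1:
        return "legacy (single .vbm for the whole job)"
    return f"true per-machine ({vbm_count} per-workload .vbm files)"

print(chain_format(r"D:\Backups\My-Backup-Job"))  # hypothetical path
```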


Great thanks. Appreciate the replies. 

Have a great weekend all!


If you back up a LOT of similar servers, you won’t get the same dedupe/compression, but I found in production that the loss of data reduction was minimal. The performance gain is significant.

It also adds portability to your VMs. After sending my jobs to tape, I can restore a single VM rather than, say, the 20 that are in the same job, which takes significantly less time.

I have been on per-VM jobs for quite a while now, and switching is a no-brainer.


“…think how many identical bytes of OS data you’ve probably got in a backup job.”

I generally think of this in OS terms. That’s some savings when deduplicating the OS files, but overall I think the flexibility of per-VM more than makes up for the inefficiency. That said, if you had a cluster of application servers or something similar with a large amount of duplicate data, then maybe it would be significant. Personally, I like per-VM because it’s much simpler to isolate what is what on the filesystem, and when it comes time to delete a backup you actually reclaim the space, versus a per-job backup just creating whitespace inside the backup files.
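
A toy model of that last point about reclaiming space (plain Python, nothing Veeam-specific; all sizes are invented):

```python
# Deleting a VM's backup in a per-VM layout removes a whole file, so
# the filesystem reclaims the space immediately. In a per-job layout
# the VM's blocks stay inside the big file as whitespace until the
# file is compacted.

per_vm_files = {"vm01.vbk": 120, "vm02.vbk": 80, "vm03.vbk": 200}  # GB
per_job = {"file_size": 400, "live": {"vm01": 120, "vm02": 80, "vm03": 200}}

# Per-VM: delete vm02's file and its space is returned to the filesystem.
del per_vm_files["vm02.vbk"]
print("per-VM space used:", sum(per_vm_files.values()), "GB")  # 320 GB

# Per-job: the .vbk stays 400 GB; vm02's 80 GB becomes whitespace
# until a compact operation rewrites the file.
del per_job["live"]["vm02"]
whitespace = per_job["file_size"] - sum(per_job["live"].values())
print("per-job file:", per_job["file_size"], "GB; whitespace:", whitespace, "GB")
```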


Honestly, the dedupe savings from a similar OS are not as good as you’d think… The per-job ratio looks great, but it’s close to what you get with per-VM jobs. I think a lot of that dedupe/compression comes from the unused space on the disks counting as a “savings”, and that holds true for per-VM jobs too.

I tried grouping OSes together and applications together, looking for the best ratios, and in the end it turns out that unless you have HUGE jobs with a ton of VMs, it most likely won’t be much different.

Every environment will be different, and there was some slight loss going to per-VM, but it was quite small. It would be interesting to see comparable numbers from before and after with the same VMs in the job. I was tracking this for a bit and stopped caring because it was so minor :)
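
To put toy numbers on that comparison (every value below is an invented assumption, and, as said above, every environment will differ):

```python
# Illustrative only: much of a per-job "dedupe ratio" can come from
# empty (zeroed) disk space, and that saving applies to per-VM chains
# as well, so the headline ratios often end up close.

num_vms = 10
provisioned_gb = 500   # assumed disk size per VM
used_gb = 200          # assumed actually-written data per VM
shared_os_gb = 30      # assumed data identical across VMs

raw = num_vms * provisioned_gb                      # source data size
per_job = num_vms * (used_gb - shared_os_gb) + shared_os_gb
per_vm = num_vms * used_gb                          # no cross-VM dedup

print(f"per-job ratio: {raw / per_job:.1f}x")       # ~2.9x
print(f"per-VM ratio:  {raw / per_vm:.1f}x")        # 2.5x
```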


I have an issue while performing the “Upgrade Backup Chain Format”.
It fails with the following error:

Example job "D-HILEXCH":

8/03/2023 8:16:45 Failed Failed to upgrade backup D - HILEXCH Error: More than one password have been found for backup D - HILEXCH

Any ideas/feedback are welcome.


Did you change the encryption password recently?


Hello Joe,
No, we didn’t change the encryption password during the past 1024 days.


Do you get this error with this job only? Or with all jobs?


I get this error with all jobs.


Then I would suggest opening a support case with Veeam...


That’s what I did yesterday, Joe, but no response so far.
I’ll post an answer whenever I get any feedback from Veeam Support.
Thanks anyway.


Support is probably under a heavy workload at the moment because of the patches for VBR v11 and v12.


Today, after upgrading to v12, we started experiencing the same error (“Error: More than one password have been found for backup...”), but only for one job of the Windows Agent Backup type. Other jobs using VMware backup are fine, and there are other Windows Agent Backup jobs that are fine as well. However, I remembered that I had disabled that particular job prior to the upgrade (as its schedule would have triggered it during the upgrade). I guess some upgrade script didn’t process that job as a consequence. After re-creating the job, all is fine now.

