
I just had a situation in a customer environment where the customer complained that a restored VM had more disks than the one that was backed up before (6 instead of 5 for a SQL Server VM).

The recovery was done using Entire VM Restore - User Guide for VMware vSphere (veeam.com).

When restoring to the original location, Veeam prompts you that the original VM will be deleted first:

At first, I thought this was related to the user that Veeam uses to access vCenter not having the appropriate rights. For example, the user might lack permission to delete the VM.

But then I noticed that the additional disk was 200GB in size. The old state that we wanted to restore only had 2x100, 2x300 and 1x500. So where did the 200GB disk come from?

It turns out that the size of this disk had been adjusted by the VMware team a few weeks ago (100 → 200). We wanted to restore from an older state with the 100GB disk still there. Could this be the reason? Does VBR leave resized disks alone during recovery and add the older state of those disks on top?

Actually no. I just tested this in my demo environment: a pure size adjustment does not give you excess disks.

The real reason is: the VMware team, for some unknown reason, also changed the SCSI ID of the same disk (2:0 → 2:1). Veeam actually keeps existing disks of the VM being overwritten if none of the disks to be restored has the same SCSI ID.

So the dialog shown above is misleading. The VM is not deleted, it is just overwritten with everything we have in the backup. Things we don’t have in the backup, like disks with differing SCSI IDs, are left alone.
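To make the observed behavior concrete, here is a small toy model of a restore-in-place that matches disks by SCSI ID (bus:unit). This is only an illustration of the behavior described above, not Veeam’s actual logic; the disk layout is an assumption based on the sizes mentioned in the post.

```python
# Toy model: during restore-in-place, a disk from the backup overwrites the
# existing disk with the same SCSI ID; existing disks with no matching SCSI ID
# in the backup are left untouched.

def restore_in_place(current_disks, backup_disks):
    """current_disks/backup_disks: dict mapping 'bus:unit' -> size in GB."""
    result = dict(current_disks)   # existing disks survive by default
    result.update(backup_disks)    # backup overwrites disks with matching SCSI IDs
    return result

# Hypothetical layout for the SQL Server VM: the VMware team resized the disk
# on 2:0 (100 -> 200 GB) AND moved it to SCSI ID 2:1.
current = {"0:0": 100, "1:0": 300, "1:1": 300, "2:1": 200, "3:0": 500}
backup  = {"0:0": 100, "1:0": 300, "1:1": 300, "2:0": 100, "3:0": 500}

after = restore_in_place(current, backup)
# The 200 GB disk at 2:1 has no counterpart in the backup, so it is kept,
# and the 100 GB disk at 2:0 is restored alongside it: 6 disks instead of 5.
print(len(after))  # -> 6
```

With matching SCSI IDs the `update` would simply replace the disk, which is why a pure size change (without the 2:0 → 2:1 move) does not produce an extra disk.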

So no hallucination involved - but still good to know what’s going on. 😎

@haslund had a question about disk restores similar to this, but for the life of me I can’t remember what exactly it was about. Rasmus?...what was that question? Is it on your blog? 


That is definitely interesting and good to know. Thanks for sharing this Michael.



Not sure which one, but there was one where a physical RDM got converted to a VMDK if not restored correctly… maybe that was it?


@Michael Melter thx for the information 😉

Personally, as a precaution, I never restore by overwriting the original VM.
I always restore to another location and rename the VM with a _restored suffix.
A similar case can happen when, for some reason, you need to exclude some disks from the backup job: if you are not careful to uncheck that option before restoring, the VM will not start after the restore :D

 



Valid point in general. But by doing that you miss certain very useful benefits of going on top of the VM:

  • Quick rollback using CBT → much faster, especially for large VMs with few changes (filers, etc.)
  • Keeping your MoRef ID → otherwise you will have to re-add your VMs to your backup jobs
  • Disk space → you will need twice the space for some time as you recover next to the original VM
  • More config effort/settings to take care of during the restore (datastore, etc.) → less process stability

My #1 choice is most of the time:

  1. Quick-Backup of the “broken” state
  2. Recovery on top of the original VM (using quick-rollback)

Thus you still have the “broken” state to, e.g., recover data from, and can still enjoy the benefits mentioned. Just my 2ct. 
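The quick-rollback point can be sketched in a few lines: instead of writing every block of the disk back, only the blocks that CBT flags as changed since the restore point are rewritten. This is a hedged toy model; the block counts and indices are made up, and real CBT works on VMkernel change IDs, not Python lists.

```python
# Toy model of CBT-based quick rollback: only blocks changed since the
# restore point are written back, so a large but mostly idle disk needs
# very few writes.

def quick_rollback(current, restore_point, changed_blocks):
    """Copy back only the CBT-flagged blocks; return the number of writes."""
    writes = 0
    for i in changed_blocks:
        current[i] = restore_point[i]
        writes += 1
    return writes

restore_point = list(range(1000))        # a 1000-block disk as stored in the backup
current = list(restore_point)
for i in (3, 42, 977):                   # a few blocks changed since the backup ran
    current[i] = -1

writes = quick_rollback(current, restore_point, [3, 42, 977])
print(writes)  # -> 3 writes instead of 1000
```

A full restore to a new location would have to write all 1000 blocks (and keep the original 1000 around until cleanup), which is the disk-space and speed argument above.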



I prefer quick rollback if needed, although I would often use Instant Recovery in a lot of cases; it really just depends on the recovery requirements. Disk space is a great argument, especially if it’s a large VM being recovered. And I like that you mentioned MoRef IDs: anything that avoids re-mapping them is also welcome.


Thx @Michael Melter for sharing this! It is a very good thing to know that Veeam does not in fact delete the existing content but just overwrites what’s in the backup. I didn’t know that; I thought everything was deleted, as mentioned in the dialog. Perhaps it would be better if Veeam changed the dialog from delete to overwrite?



I’d agree. Maybe even update the dialog box with an explanation of what occurred. 


Interesting. Thanks for sharing this. I would have been surprised too!

