
We have a large Windows file server with many disks/shares, and we have been backing them up with an agent, with a separate job for each drive selecting the volume. Since it's a VM, I'm wondering whether I should also have a VM job that backs up just the C drive. Since we currently only back up volumes, I wonder whether we could actually recover the entire server.

One of the best approaches for backing up large VMs (large virtual disks) is to leverage storage snapshot integration. This helps in two ways. 

  1. The VMware snapshot is only open for a few seconds, and all the blocks are pulled from the storage snapshot. A long-running VMFS snapshot is problematic, especially when it is consolidated, so this relieves that issue.
  2. The storage snapshot is an excellent recovery source - bringing back a very large VM or a large VMDK from a storage snapshot is much faster than a restore from backup.

You should always back up the system drive so that you can recover the entire file server.

Another factor to consider is block size. Increasing the block size to 4 MB enables much faster recovery, at the price of larger incrementals. That can be totally worth it if it means you can restore a 20 TB file server from backup in less than a day rather than more than a day.
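The trade-off above is easy to sanity-check with back-of-envelope arithmetic. The throughput figures below are illustrative assumptions, not Veeam benchmarks - the point is only that a larger block size means fewer, bigger reads from the repository, which typically raises effective restore throughput:

```python
def restore_hours(size_tb: float, throughput_mb_s: float) -> float:
    """Hours to restore size_tb terabytes at throughput_mb_s MB/s."""
    size_mb = size_tb * 1024 * 1024  # TB -> MB, binary units
    return size_mb / throughput_mb_s / 3600

# Hypothetical effective restore throughputs for a 20 TB restore:
print(f"1 MB blocks @ 200 MB/s: {restore_hours(20, 200):.1f} h")
print(f"4 MB blocks @ 500 MB/s: {restore_hours(20, 500):.1f} h")
```

At the assumed rates, the smaller block size puts the restore over a day (~29 h) and the larger one well under it (~12 h), which is the difference the post is describing.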

If you don’t have storage snapshot integration, a full VM backup is still advisable, unless the change rate is so high or source storage performance is so bad that you can’t complete the full backup in a reasonable backup window.


You would need to restore each volume one at a time using the Agent. What is the reason not to back up the VM with Veeam via VMware instead of the Agent? That is typically the easiest way to recover the server, as the Agent requires booting the recovery ISO and having access to the repository where the backups are stored.


I would not be using the agent for this. I’d back up the VM with an image-level backup. In the past, due to the large size of the drives on the file server, I did have multiple jobs for the same server, with different drives going to different repos. I was able to do this by excluding certain drives in each job. For instance, a couple of drives were archive data and a couple were production data, so I had a production job that excluded the archive drives and backed up to the production repo, and an archive job that excluded the production drives and backed up to the archive repo. By doing this, I could also run fewer archive backups or adjust retention differently for that data, etc. (I didn’t, but I could have.)

You can access these exclusions by going into your backup job and, under Virtual Machines, selecting Exclusions > Disks tab > select the VM > Edit > choose “Selected disks”, then add the disks you want to process via their SCSI ID.
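The production/archive split described above is really just set complementation: each job excludes every data disk that belongs to the other category, while the system disk stays in both. A small sketch - all disk roles and SCSI IDs here are hypothetical, not taken from any real configuration:

```python
# Hypothetical disk inventory for the file server: SCSI ID -> role.
disks = {
    "0:0": "system",      # C: drive, kept in every job
    "0:1": "production",
    "0:2": "production",
    "1:0": "archive",
    "1:1": "archive",
}

def exclusions(job_role: str) -> list[str]:
    """SCSI IDs to exclude from a job: every disk that is neither the
    system disk nor part of the job's own role."""
    return sorted(scsi for scsi, role in disks.items()
                  if role not in ("system", job_role))

print(exclusions("production"))  # the archive disks
print(exclusions("archive"))     # the production disks
```

Each job then gets the complementary exclusion list, so between the two jobs every disk is backed up exactly once (apart from the system disk, which both jobs keep).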

