Hi All, it’s my first post here.

I’m evaluating Veeam in my lab. Backups run at a fast 93MB/s, but restores are slow at 3MB/s.

-ESXi 7.0 Update 3 host running on an i7 system with 64GB, SSD and HDD storage, and a 1G NIC

-VBR 12.0.0.1420 running on an i5 system with 16GB, with two 1G NICs; NIC1 connects to the lab network with the ESXi server, NIC2 connects to the main network and the NAS where the backups are stored

-NAS is capable of sustained 112MB/s read and write speeds

-when I do a backup of any of the VMs, it runs at about 91MB/s

-when I do a restore of those VMs, including the ones stored on the SSD, restore speed is 3MB/s

-overnight I restored 3 VMs; 2 are under 100GB and finished, the third is about 1.6TB and completed only 9% in 7 hours. At that rate it will take 77.7 hours; the backup of that VM took 4 hours

-if I use Instant Recovery, performance is reasonable; for example, with a Windows Server VM I can copy files from the Windows server to a client computer at about 50MB/s. I see traffic running at that speed from the NAS to the VBR server to the ESXi server

-writes to the Windows Server are also reasonable, and I see the data written to a temp folder on the VBR server; I don’t remember the exact speed but it was over 25MB/s

-that should be good evidence that there is no bottleneck in the NAS, VBR server or ESXi host

-when I initiate the migration (so that the files are copied to the ESXi host), it runs at 3MB/s

-large file writes and reads to the Windows server while the migration is running (and even before starting it) run at reasonable speeds, so again no saturation issue

-CPU usage on both the ESXi server and VBR server is very low, under 15%

-from the VBR server, I log into ESXi and upload a file stored on the NAS or a local drive; it runs at 30MB/s

Any ideas what I can do to get faster restore speeds? I’ll even be happy with 30MB/s, but 90MB/s (close to gigabit speed) would be nice. 3MB/s is just too slow.
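
If anyone wants to reproduce the raw-network check, a minimal Python sketch like the one below, run between the VBR server and any other machine on the link, takes Veeam, ESXi and SMB out of the picture entirely; the port number and transfer size are arbitrary choices.

```python
import socket
import sys
import time

# Minimal raw TCP throughput test: run "python tcptest.py server" on one
# machine and "python tcptest.py client <server-ip>" on the other.
PORT = 5001        # arbitrary free port
CHUNK = 64 * 1024  # 64 KiB per send/recv
TOTAL = 1024 ** 3  # client sends 1 GiB

if sys.argv[1] == "server":
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received, start = 0, time.monotonic()
        while data := conn.recv(CHUNK):
            received += len(data)
        secs = time.monotonic() - start
        print(f"received {received / 1e6:.0f} MB at {received / 1e6 / secs:.1f} MB/s from {addr[0]}")
else:
    with socket.create_connection((sys.argv[2], PORT)) as conn:
        payload = b"\x00" * CHUNK
        sent, start = 0, time.monotonic()
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
        secs = time.monotonic() - start
        print(f"sent {sent / 1e6:.0f} MB at {sent / 1e6 / secs:.1f} MB/s")
```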

Do you have any traffic throttling enabled?

Enabling Traffic Throttling - User Guide for VMware vSphere (veeam.com)

Additionally, do you have any proxy servers set up, or are you running everything from the B&R server itself?

VMware Backup Proxy - User Guide for VMware vSphere (veeam.com) 


Thanks for your reply. I’m using the free “Veeam Backup and Replication Community Edition”. I don’t believe it has the throttling options; I could not find them. And there is no proxy server, just a NAS for storing the backups, a Windows machine running VBR Community Edition, and the ESXi server.

 


I have a few more questions. 

Are you restoring back to the same LUN where you are backing up from? Have you tried a different LUN? 

Feels like it is performing a restore over the Management network. 



Please clarify the term LUN in this context.

I’m restoring to the same ESXi host that was backed up. I’m actually restoring to the VM that was backed up, as I am practising an Active Directory migration scenario.

The NAS is a simple Windows SMB share, e.g. \\nas\backups\veeam, not a SAN. Yes, the VBR server is restoring over the management LAN, but I would expect faster than the 3MB/s (or about 40Mbps over the gigabit connection). The ESXi server is running a lab environment, so there is no other traffic or load on it. I have only one ESXi server in the lab.

I copied the *.vbk files from the NAS to the Veeam Backup and Replication server and initiated an “Entire VM Restore”, and got the same speed of about 3MB/s instead of the backup speed of about 93MB/s. So that eliminates the NAS as the problem.

As I mentioned in my original post, if I use Instant Recovery, the VM’s disk speed is reasonable, even though it’s pulling data from the VBR server over the management LAN. Backups are fast. It’s just restoring to the ESXi host that is slow.

Any other thoughts?


Check your routing back and forth. With dual-homed systems I often see issues with network paths, e.g. a restore running over your main network on a path you would not expect.
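
A quick way to see which local interface Windows actually picks for the ESXi address is a small sketch like this (the hostname is a placeholder; a connected UDP socket sends nothing, the OS just resolves the route):

```python
import socket

ESXI = "esxi.lab.local"  # placeholder: your ESXi management address

# Connecting a UDP socket transmits no packets; the OS simply picks the
# route, so getsockname() reveals the outgoing interface's IP address.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect((ESXI, 443))
    print("traffic to", ESXI, "leaves via local IP", s.getsockname()[0])
```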


Thanks for the input StefanZi.

I copied the vbk files from the NAS to a local drive on the Veeam backup server, opened the vbk file and initiated a restore, removing the NAS (and the second network interface) from the equation. The issue still happens.

I also used Wireshark to confirm there was no weird behaviour on the network. All looked good; no packet drops or duplicate packets.



I found https://forums.veeam.com/vmware-vsphere-f24/veeam-11-extremely-slow-replication-job-t74251-60.html

In summary, it appears to be an issue with ESXi 7.0. I haven’t digested everything in the three pages of the post. For many, it started when they upgraded from ESXi 6.x to ESXi 7. I’ll keep following and hopefully there will be a resolution at some point.

I’m new to Veeam and have limited ESXi experience. Some have mentioned that “hot add” works. Is the “hot add” method available in Veeam Backup & Replication Community Edition? If yes, can someone point me to a resource that explains how to do it?

Thanks in advance.



Good find!

Hot-add is available in CE. You need to add a Windows or Linux VM to the VBR infrastructure to which disks from the target datastore can be attached. In your case, with local disks, you need to run a hot-add proxy VM on the source ESXi host and add it to your Backup Infrastructure as a vSphere proxy.

The user guide explains how it works and what’s needed.

Virtual Appliance (HotAdd) - User Guide for VMware vSphere (veeam.com)
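
If you want to sanity-check placement from a script (the proxy VM on the right host and seeing the target datastore), here is a rough sketch using VMware’s pyVmomi SDK (pip install pyvmomi); the hostname and credentials are placeholders:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders: your ESXi management address and credentials.
ctx = ssl._create_unverified_context()  # lab only: skips cert validation
si = SmartConnect(host="esxi.lab.local", user="root", pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    # For hot-add, the proxy VM must run where it can attach disks from
    # the target datastore, so compare each VM's host and datastores.
    for vm in view.view:
        host = vm.runtime.host.name if vm.runtime.host else "?"
        stores = ", ".join(ds.name for ds in vm.datastore)
        print(f"{vm.name}: host={host} datastores=[{stores}]")
    view.Destroy()
finally:
    Disconnect(si)
```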

Btw, about the networking: if you checked with Wireshark that the right interface is used, all good. I thought the restore path from VBR to ESXi might be taking the wrong route. I don’t know your full network setup of course, but if you checked that it’s taking the correct link - all good.


Hi @DomDeFran 

 

You can edit proxy settings here: https://helpcenter.veeam.com/docs/backup/vsphere/backup_proxy_edit.html?ver=120

Here’s the detail on transport modes: https://helpcenter.veeam.com/docs/backup/vsphere/transport_modes.html?ver=120

 

In essence, if you’ve got your proxy set to automatic, or configured for hotadd, you need to have a proxy running as a VM hosted on the same ESXi server as the VM you’re trying to protect or restore to.

 

You could create Linux proxies if you’ve got multiple hosts you need to use hot-add for, if you’re semi-comfortable with Linux and constrained for Windows licenses.


I’d try adding a proxy VM to use hot-add, to test and see if that helps.

 

What are your VMware network and management network speeds? Sounds like it could be using the latter.

 

For a test, you could create an active full, then try to restore it right away, as that would be a pretty sequential read for testing.
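
To separate repository read speed from the transport, a quick sketch like this measures raw sequential read throughput from the backup file; the .vbk path is a placeholder:

```python
import time

VBK = r"E:\Backups\Job1\vm.vbk"  # placeholder: path to a full backup file
CHUNK = 4 * 1024 * 1024          # 4 MiB sequential reads

total, start = 0, time.monotonic()
# buffering=0 avoids Python's own buffering; the OS cache can still
# flatter the result, so use a file bigger than RAM for a fair number.
with open(VBK, "rb", buffering=0) as f:
    while data := f.read(CHUNK):
        total += len(data)
secs = time.monotonic() - start
print(f"read {total / 1e6:.0f} MB at {total / 1e6 / secs:.1f} MB/s")
```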

 

I don’t know what kind of stats you can get out of the NAS, its type, or its network speed. It could be a bit limited, but you did mention SSD, which is good… 

 

What does Veeam say the bottleneck is? 


Hey Scott,

Everything in the test network is 1G with no load. I tried copying the vbk file to a local drive on the Veeam backup server and it was the same speed, averaging about 4MB/s or 30Mbps. So we know it’s not the NAS. I also have no problems pulling files from the NAS at a full gigabit/sec.

I posted a link earlier this morning on the Veeam forums that talks about the issue. Seems to be a VMware 7 issue.

I set up an agent on a Win10 VM running on the host and it restores at close to 100MB/s, so that’s a workaround. But in my opinion, having a Win10 VM hanging around just to speed up restores is not an ideal solution. It also adds another layer of complexity to a very simple setup. I hope the issue gets fixed so we can use the NBD transport mode.

Just as I was about to click SEND, I glanced over at the restore, and it failed after 7 minutes.  Boo!!  Solve one problem, face another. 



 

I’m not resource-bound at work, but having so many servers for single tasks used to bother me too. I’ve come to accept it; I have many Veeam servers now for different tasks and things run great. The nice thing is almost anything can run as a proxy, so if a proxy fails another can take over.

 

I’m on 7.3 so I don’t have any issues with speeds. I also use storage snapshots, which will max out the 16Gb Fibre Channel connections on our SANs.

 

4MB/s is quite slow though. Let us know if updating VMware solves that for you.


Inside the VM, try disabling “IPv4 Checksum Offload” in the vmnic properties.

Went from 3MB/s to 40MB/s.
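
If you’d rather script it than click through the adapter properties, a small sketch along these lines shells out to the standard Windows Get-/Disable-NetAdapterChecksumOffload cmdlets; the adapter name is a placeholder:

```python
import subprocess

ADAPTER = "Ethernet0"  # placeholder: the vmnic's name inside the guest

# Show the current checksum-offload settings, then disable IPv4
# checksum offload only (the setting that mattered here).
for cmd in (
    f'Get-NetAdapterChecksumOffload -Name "{ADAPTER}"',
    f'Disable-NetAdapterChecksumOffload -Name "{ADAPTER}" -IpIPv4',
):
    subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)
```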

 



Chiming in here to say this fixed the slow restore issues I was having with Hyper-V on Server 2022 when restoring from a NAS to an alternate host. The exception being that I disabled IPv4 Checksum Offload on the host network adapter (TP-Link TX401). Went from 4MB/s to 130MB/s (2.5G connection from NAS to 10GbE host). Thank you!

