
The Veeam backup server sits on a 1-gigabit network, while the VMware proxy server and the Data Domain storage are on 10-gigabit networks. Despite this, the backup speed consistently reaches only 270 to 300 MB/s. Is something overlooked in this setup? Additionally, would incorporating a WAN accelerator improve performance for the copy job over the WAN link?

Those are normal speeds over such a network. I get similar speeds...sometimes 500-700 MB/s (rarely, I’ll get 1 GB/s speeds). Many factors determine backup & copy speeds. You can start by looking at one of your backup jobs and viewing the four bottleneck metrics → source, proxy, network, target (source & target refer to storage resources). You can start there...but I think your speeds are good.
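As a quick back-of-the-envelope check (assuming the 270-300 figure is MB/s, as Veeam reports processing rate), those numbers already exceed what a 1 GbE link can carry, so the data is clearly moving over the 10 GbE proxy → Data Domain path rather than through the 1 GbE backup server:

```python
# Hedged sanity check: convert the observed rate into link terms.
observed_mb_per_s = 300                  # reported processing rate, MB/s
gbit_equivalent = observed_mb_per_s * 8 / 1000

one_gbe_ceiling = 1000 / 8               # ~125 MB/s max for a 1 GbE link

print(f"{observed_mb_per_s} MB/s ≈ {gbit_equivalent:.1f} Gbit/s")
print(f"1 GbE tops out near {one_gbe_ceiling:.0f} MB/s")
# 300 MB/s ≈ 2.4 Gbit/s > 1 Gbit/s, so the traffic cannot be
# flowing through the 1 GbE Veeam server itself.
```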

As for copy jobs...it’s only recommended to use WAN accelerators for WAN speeds below roughly 500 Mb/s (I forget the exact figure). Otherwise, you just configure a “direct” copy job. But even for direct transport, you should be using proxies at source & target.


We would need a lot more information. The proxy being on a 10Gb network is only part of the picture: what backup method are you using? Is it a physical or virtual proxy? What is your storage on the production side?

The Veeam server is just scheduling the jobs, as long as it’s not being used as a proxy itself. 

What does the Veeam console say the bottleneck is?

This could be anything from storage to CPU, to something else. 

As far as a WAN accelerator goes, it’s not going to make things faster. It will give you some dedupe/compression and handle sending the data in a better way over a slow link. What is the speed of the WAN link you are transferring over?
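To make that concrete, here’s a rough sketch with purely hypothetical numbers (the daily changed data, link speed, and reduction ratio are all assumptions) showing what dedupe/compression buys on a slow link — less data on the wire, but the link itself is no faster:

```python
# Rough estimate: copy-job duration with and without the data
# reduction a WAN accelerator might provide (numbers are made up).
changed_gb = 200     # incremental data to copy per day (assumption)
wan_mbit = 100       # WAN link speed in Mbit/s (assumption)
reduction = 0.5      # 50% savings from dedupe/compression (assumption)

def hours(gb: float, mbit: float) -> float:
    # GB -> Mbit (decimal), divided by link rate, converted to hours
    return gb * 8000 / mbit / 3600

print(f"Direct:      {hours(changed_gb, wan_mbit):.1f} h")
print(f"Accelerated: {hours(changed_gb * (1 - reduction), wan_mbit):.1f} h")
# The accelerated copy finishes sooner only because it ships fewer
# bytes -- which is why it only pays off on slow links.
```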

 



@Scott 

Backup source: VMware cluster

What backup method are you using? Incremental

Is it a physical or virtual proxy? Virtual proxy, and it’s on the VMware cluster.

What is your storage on the production side? Data Domain, mapped in the Veeam server as DD Boost.

What does the Veeam console say the bottleneck is? Target / Proxy; it varies by job.

The Veeam server is on 1 gig; the Data Domain and VMware cluster are on 10 gig.

 


By source, or production storage, I mean the production side: what the VMware VMs are running on. Just to get an idea of that as well.

 

Does it tell you the transport mode of the proxy in the jobs (HotAdd, NBD)?



Production storage is on the same 10-gig network, and it’s Dell VxRail. Transport mode: see attached pic.

 


Right, but if you look at the job history (Success, Fail, Warnings area) and check out some of the stats where it tells you the bottleneck, etc., it should tell you what transport you are using.
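If you’d rather check a whole batch of sessions at once, here’s a minimal sketch (the log file path is a placeholder, and it assumes you’ve saved the session details as plain text) that pulls the transport tag out of the per-disk proxy lines:

```python
# Scan an exported Veeam session log for the transport tag on each
# per-disk proxy line, e.g.:
#   Using backup proxy "vmwareproxy-01" for Hard Disk 1 [nbd]
import re

# proxy name in quotes, transport tag in square brackets
pattern = re.compile(
    r'backup proxy "(?P<proxy>[^"]+)".*\[(?P<mode>hotadd|nbd|san)\]',
    re.IGNORECASE,
)

with open("session.log", encoding="utf-8") as f:   # placeholder path
    for line in f:
        m = pattern.search(line)
        if m:
            print(f'{m.group("proxy")} -> {m.group("mode")}')
```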

 

If you are using a virtual proxy, HotAdd works great, but you need to have a proxy on every host in the cluster for that. If not, it’ll most likely be using NBD.

If you have a job with a single VM in it and put the Veeam proxy on the same host, it should use HotAdd; if the job contains a VM from a host that the proxy ISN’T on, it’ll use NBD. If you have slower ports for the VMware management network, this can cause all kinds of problems.

My virtual proxies are disabled and kept for emergencies only, as I use storage snapshots or direct SAN backups.

 

This entire best practices guide is super helpful if you have some time to read it, though.

https://bp.veeam.com/vbr/2_Design_Structures/D_Veeam_Components/D_backup_proxies/vmware_proxies.html


@Scott Thank you for dedicating time to address my question and providing detailed explanations. This not only benefits me but also others who may come across similar queries. I appreciate your time and effort.

  • Backup proxy "vmwareproxy-01" for Hard Disk 1 [nbd].
  • The Management Network operates on a 10 gig connection.

I am using VMware vSphere vSAN as storage, which is considered DAS (direct-attached storage). The setup consists of a three-node cluster with one VM proxy. The VMs to be backed up are distributed among the three hosts, and they move around. Ideally, they should be evenly distributed across the three nodes. However, it’s also possible to keep all backup-worthy VMs on a single host. Also, if necessary, proxy VMs can be configured on each node.

In conclusion:

 

**Scenario 1:**


For this scenario, I would consolidate all VMs to be backed up on a single host, with the VM proxy located on the same host and HotAdd enabled.

 

**Scenario 2:**


If the VMs to be backed up are spread across the three nodes, I need to have a VM proxy on each node with NBD enabled.


I want to clarify Scott’s comment about having a VM proxy on every host; it’s a somewhat inaccurate statement. Proxies are required on every host only if your vSphere hosts use local storage, and by local storage I don’t mean disks used for vSAN. According to the guide, as well as my own recent move to hotadd, you need at least one VM proxy per cluster, though I recommend two for redundancy. And let me clarify my “per cluster” comment: the underlying requirement for your proxies to use hotadd mode is that the VM proxy(ies) must be able to access the same underlying storage (i.e., datastores) that the production VMs you’re backing up reside on.

For vSAN specifically, just like if you were running any shared storage, you need at least one VM proxy in your vSAN cluster (but again, I recommend two). You can read more about transport modes with vSAN in the guide. You shouldn’t need to consolidate your VMs onto one host for them to be backed up with the best transport mode, which in this case is hotadd. Keep them as they are...they should back up using hotadd just fine, because all your hosts have access to the same storage (the vSAN datastore). You can verify your jobs are using hotadd by double-clicking a given job, viewing a few VMs in the job, and making sure your proxies are using the [hotadd] mode:
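To restate that rule in code form, here is a minimal sketch of the eligibility logic described above (illustrative only, not Veeam’s actual proxy-selection algorithm; all names are made up):

```python
# Sketch of the hotadd-eligibility rule: a proxy can use hotadd when it
# can reach the same datastore the production VM lives on; otherwise the
# job falls back to NBD over the management network.
def pick_transport(vm_datastores: set[str], proxy_datastores: set[str]) -> str:
    if vm_datastores & proxy_datastores:   # shared storage is visible to both
        return "hotadd"
    return "nbd"

# In a vSAN cluster every host sees the same datastore, so one proxy
# (two for redundancy) covers all VMs:
print(pick_transport({"vsanDatastore"}, {"vsanDatastore"}))    # hotadd
# Host-local storage with no proxy on that host falls back to NBD:
print(pick_transport({"esxi-02-local"}, {"esxi-01-local"}))    # nbd
```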