
Good morning, I'm a new user in this community. I have a question about VMware Essentials 8.0 U1 and Veeam Community Edition.

I'm trying to create a 10Gb network between two VMware hosts. The first host is a Dell T630 (32GB RAM, RAID 5 SAS, 8-core CPU) running VMware Essentials 8.0 U1 with the latest patch, with an Intel X520-DA2 card and a 1Gb card. The second host is a Dell T640 (32GB RAM, RAID 5 SAS, 10-core CPU) running VMware 8.0 U1 with the latest patch, also with an Intel X520-DA2 card and a 1Gb card. The Veeam 12 Community VM (latest patch) has 4 cores, 8GB RAM, and a 400GB HDD. The network has a 1Gb switch and a Mikrotik 10Gb switch.

I don't reach 10Gb speed with a VM copy between the T630 and T640 - only 1.2Gb max. Why?

A VM copy job to an external Veeam repository (not on the T630) reaches 2.4Gb max.

I need help to resolve the problem.

I don't understand whether the problem is a VMware restriction or Veeam Backup.

 

Hi @csvtuno - you mentioned twice in your post above that you have 1Gb cards on your hosts. Did you mean 10Gb cards?

Regardless, whatever throughput card you have, you'll never reach its full throughput speed due to the network overhead of TCP/IP packet encapsulation/decapsulation, as well as OS, router, and/or switch overhead. 1.2-2.4GB is pretty good to be honest.
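As a rough, back-of-the-envelope sketch of that framing overhead (assuming a standard 1500-byte MTU and plain IPv4/TCP; these are illustrative numbers, not measurements from this thread):

```python
# Back-of-the-envelope only: estimate usable TCP payload throughput on a link
# once Ethernet/IP/TCP framing is accounted for, assuming a standard 1500-byte MTU.
def usable_throughput_gbps(link_gbps: float, mtu: int = 1500) -> float:
    eth_overhead = 14 + 4 + 8 + 12    # Ethernet header + FCS + preamble + inter-frame gap (bytes)
    ip_tcp_headers = 20 + 20          # IPv4 + TCP headers without options (bytes)
    payload = mtu - ip_tcp_headers    # TCP payload carried per frame
    on_wire = mtu + eth_overhead      # bytes actually occupying the wire per frame
    return link_gbps * payload / on_wire

for link_gbps in (1, 10):
    usable = usable_throughput_gbps(link_gbps)
    # divide by 8 to convert gigabits per second into gigabytes per second
    print(f"{link_gbps} Gbps link -> ~{usable:.2f} Gbps usable, ~{usable / 8:.2f} GB/s")
```

Real jobs then lose more to disk speed, compression/dedupe and protocol chatter, so measured copy rates land below even these figures.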

This wiki post and this blog post explain the overhead in more detail.


No, on my hosts I have the 1Gb cards on the 1Gb switch and the 10Gb cards on the 10Gb switch.

Thanks


Then the most you should get is 1Gb, because you have 1Gb cards in the network path (actually... you should probably never see more than 800-900Mb). Not sure how you even got 1.2-2.4Gb with 1Gb cards in your path.


I created a port group on the VMware host with the 10Gb card and selected this port group for all the VMs on my host, but the Veeam VM copy job has a max speed of 1Gb. Is that normal?
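One quick way to confirm that the port group really sits on the 10Gb uplinks (and not a vmnic plugged into the 1Gb switch) is to query the host. A minimal pyVmomi sketch, assuming placeholder host name and credentials:

```python
# Sketch only: list each standard-vSwitch port group on the host together with the
# physical uplinks (and their link speed in Mb) behind it. Host name and credentials
# are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; use proper certificates in production
si = SmartConnect(host="esxi-t630.lab.local", user="root", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net = host.config.network
        speed = {p.key: (p.linkSpeed.speedMb if p.linkSpeed else 0) for p in net.pnic}
        for pg in net.portgroup:
            vsw = next(v for v in net.vswitch if v.name == pg.spec.vswitchName)
            uplinks = [(key.rsplit("-", 1)[-1], speed.get(key, 0)) for key in (vsw.pnic or [])]
            print(host.name, pg.spec.name, "->", uplinks)   # e.g. ('vmnic4', 10000) = 10Gb uplink
finally:
    Disconnect(si)
```

If the Veeam VM's port group shows up on a vSwitch whose only uplinks report 1000 Mb, that alone explains the 1Gb ceiling.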


How are the networks set up?

Is your management network on 1GB and a separate 10GB network to your repositories?


Hi @csvtuno - look at this diagram from the Veeam User Guide on how Backup Copy job traffic works. If you have a 1gb card at any point within the diagram, you won’t be able to get any faster than what you’re getting. And again, what you’re currently getting is pretty good.


So at minimum I need only 10Gb cards and only a 10Gb switch in the network path to get full speed?


How are the networks set up?

Is your management network on 1GB and a separate 10GB network to your repositories?

Was going to ask this, as Veeam uses the management network (vmkernel port) for backups - if that's 1Gb, then that's the speed you get; otherwise you need to move everything to the 10Gb.


If you’re not using proxy servers on each host, the data is going to have to flow through your Veeam server which, if I understand correctly, is connected via a 1Gb link since it only has 1Gb cards.  If you have a proxy server on each host, they should be able to communicate with each other directly through the hosts’ 10Gb NICs.  That said, you of course won’t get full 10Gb throughput due to overhead.  One other thing that was noted is that without an on-host proxy server, your Veeam server will back up over the management NIC of each host, and that is throttled by VMware to maintain host health.  Adding a proxy server allows the traffic to pass between VMs, which isn’t subject to the same throttling restrictions.


You are also able now in vSphere 7 and higher to add a vmkernel port for backups that Veeam will use to send traffic instead of the default management port.  Something to look into.

Check this post on the community - https://community.veeam.com/blogs-and-podcasts-57/how-to-isolate-nbd-backup-traffic-in-vsphere-977
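For reference, a minimal pyVmomi sketch of that idea, assuming a dedicated vmkernel adapter (here "vmk2", already created on a 10Gb port group) and placeholder host name/credentials; "vSphereBackupNFC" is the service type vSphere 7.0+ exposes for this:

```python
# Sketch only: tag an existing vmkernel adapter with the vSphereBackupNFC service so
# NBD backup traffic uses it instead of the management vmk. "vmk2", the host name and
# the credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-t640.lab.local", user="root", pwd="***", sslContext=ctx)
try:
    host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi-t640.lab.local", False)
    nic_mgr = host.configManager.virtualNicManager
    nic_mgr.SelectVnicForNicType("vSphereBackupNFC", "vmk2")          # pin backup NFC to vmk2
    print(nic_mgr.QueryNetConfig("vSphereBackupNFC").selectedVnic)    # confirm the selection
finally:
    Disconnect(si)
```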

 


I did not know this was a thing….amazing and I’ll have to dive down this one.  Thanks for the link on this Chris!


No problem. I am testing in my lab now. 😎


Ohhh, such a thing!

I normally add a secondary virtual NIC to my Veeam server, which is connected to a port group dedicated to the 10Gb network, and I also add the proxy servers, the destination storage, and the preferred network in the Veeam B&R configuration.
Veeam B&R will use the management network to talk to vCenter, but the backups will then be moved/copied through the dedicated network.
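A minimal pyVmomi sketch of the first step (hot-adding the second NIC to the Veeam VM on the 10Gb port group); the vCenter address, credentials, VM DNS name and port group name are placeholders:

```python
# Sketch only: add a second vmxnet3 adapter to the Veeam VM, connected to a
# standard-vSwitch port group on the 10Gb uplinks. All names/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    vm = si.RetrieveContent().searchIndex.FindByDnsName(None, "veeam01.lab.local", True)  # vmSearch=True

    nic = vim.vm.device.VirtualVmxnet3()
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(deviceName="PG-Backup-10G")
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(startConnected=True, connected=True)

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))   # returns a Task to monitor
finally:
    Disconnect(si)
```

The preferred network itself still has to be set in the Veeam B&R console (Global Network Traffic Rules) so the data movers bind to the 10Gb subnet.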

cheers.


Move your vmkernel port or check what transport method you are using. 99% of the time you are using the 1Gbps management port when you expect to be using the 10Gb.
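A quick way to check that from the vSphere side is to list each vmkernel adapter along with the service types it's selected for; a pyVmomi sketch with placeholder host name/credentials:

```python
# Sketch only: show every vmkernel adapter, its port group, its IP and the services
# (management, vmotion, vSphereBackupNFC, ...) it is selected for. Host name and
# credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-t630.lab.local", user="root", pwd="***", sslContext=ctx)
try:
    host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi-t630.lab.local", False)
    services = {}
    for cfg in host.config.virtualNicManagerInfo.netConfig:
        for key in cfg.selectedVnic or []:
            services.setdefault(key.rsplit("-", 1)[-1], []).append(cfg.nicType)  # key ends in vmkX
    for vnic in host.config.network.vnic:
        print(vnic.device, vnic.portgroup, vnic.spec.ip.ipAddress,
              services.get(vnic.device, ["no services tagged"]))
finally:
    Disconnect(si)
```

If only vmk0 on the 1Gb port group carries "management" and nothing is tagged on the 10Gb side, NBD traffic is still riding the 1Gb link.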

