
A new feature in vSphere 7 is the ability to configure a VMkernel port used for backups in NBD (Network Block Device), or Network, mode. This can be used to isolate backup traffic from other traffic types. Before this release, there was no direct option to select the VMkernel port used for backup. In this post I show how to isolate NBD backup traffic in vSphere.

Configuration

It is quite simple to configure backup traffic isolation. vSphere 7 introduces a new service tag for backup: vSphere Backup NFC. NFC stands for Network File Copy. When this tag is selected, vSphere returns the IP address of this port when the backup software asks for the ESXi host's address.

So all you need to do is add a new VMkernel port, enable the Backup service on it and set the IP address and VLAN ID. The host does not need to be rebooted. NBD backup traffic will now be routed through this port.
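The same configuration should also be possible with esxcli directly on the host. Here is a minimal sketch; the port group name, device name and IP address are just examples from my lab, and the VLAN ID is set on the port group rather than on the VMkernel adapter itself:

# Create the VMkernel adapter on an existing port group (standard switch)
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=Backup
# Assign a static IP address to the new adapter
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.250.1 --netmask=255.255.255.0 --type=static
# Tag the adapter for backup (NFC) traffic
esxcli network ip interface tag add -i vmk2 -t VSphereBackupNFC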

 

 

 

What does it look like?

For verification there are a few options: you can check the log files or monitor the ESXi network throughput. On my demo ESXi host, I created a VMkernel port vmk2 with the backup tag. The IP of this port is 10.10.250.1.
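You can also check directly on the host whether the tag has been applied to the adapter. A quick sketch with esxcli, assuming vmk2 is the adapter in question:

# List the tags assigned to the VMkernel adapter; VSphereBackupNFC should show up
esxcli network ip interface tag get -i vmk2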

Log file

Your backup software probably creates logs for each backup process. In the case of VBR, several log files are created. To find out which IP address is used for backup traffic, you can search the log files Agent.job_name.Source.VMDK_name.log in the directory C:\ProgramData\Veeam\Backup\job_name on the VBR server.

In the appropriate log file you can see the IP address returned by vCenter for the backup connection. Here is a screenshot with the backup tag enabled.

For comparison, without a tagged network port:

 

Monitor ESXi network

For real-time monitoring I prefer esxtop in the ESXi console. In this screenshot you can see backup traffic on vmk2:

For comparison, without a tagged backup port:
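If you are not familiar with esxtop's network view, this is roughly how I get there:

# Start esxtop in an SSH or ESXi Shell session, then press "n" to switch to the network view;
# during a backup, the MbRX/s and MbTX/s columns of the vmk2 row show the NBD traffic
esxtop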

 

Veeam Preferred Networks

The question may arise whether it is possible to select the ESXi VMkernel port natively in VBR. To test this, I tried using Preferred Networks to define my network of choice.

The answer: this does not work; backup traffic is still routed through the default management port of the host.

What about the other way round: can Preferred Networks override the selection made by tagging a VMkernel port? The answer here is also no. Even if the management network is configured as preferred, the tagged port is used.

 

For more information - for example command line options - visit https://vnote42.net/2021/05/31/how-to-isolate-nbd-backup-traffic-in-vsphere/

 

It has turned out that there are issues when the isolated backup traffic has to be routed. In this situation, ESXi does not work as expected. I could reproduce this in my lab; the behavior was just strange!

According to VMware Support, this will be reviewed internally. So for now: do not use this feature if you have to route the isolated backup traffic.

Perhaps I am missing something, but why should I do this? It is used for NBD traffic only, if I understand correctly.

We have configured two VLANs for our backup proxies and server: the vCenter management LAN connection and a dedicated backup LAN connection. The backup VLAN is the preferred network connection in Veeam. So all backup data traffic is routed over the backup VLAN and the management data is routed over the management VLAN.

 


Great article @vNote42 


Mhh… the management traffic is routed over the management network, but the data traffic is reliably routed over the backup network.

And do I really want to use NBD only? We see a speed difference of at least one to five - in some environments one to ten - between NBD and hotadd mode…

 

Interesting topic to discuss with all the architects here I think 😎


Perhaps I am missing something, but why should I do this? It is used for NBD traffic only, if I understand correctly.

We have configured two VLANs for our backup proxies and server: the vCenter management LAN connection and a dedicated backup LAN connection. The backup VLAN is the preferred network connection in Veeam. So all backup data traffic is routed over the backup VLAN and the management data is routed over the management VLAN.

 

I believe it is because, even with a preferred network set up, VBR will not use it and uses the management network of your hosts regardless, as noted in the “Veeam Preferred Networks” section.


Amazing content as always Mr @vNote42 


Mhh… the management traffic is routed over the management network, but the data traffic is reliably routed over the backup network.

And do I really want to use NBD only? We see a speed difference of at least one to five - in some environments one to ten - between NBD and hotadd mode…

I agree HotAdd is the way to go for speed, etc., but I think the point is that if Veeam cannot use one of the faster modes and defaults to NBD, this separates the traffic from management. I need to test this in my lab and see what happens. :grinning:


Good one!


Thanks @vNote42 


Nice reading as always @vNote42 


Nice post @vNote42 !

Cheers!


Hi @vNote42, thx for sharing. I did not know that this is already possible in vSphere. I have always used and recommended the hot-add method to my colleagues; I use it whenever direct SAN is not possible or when VBR is virtualized with an iSCSI repository. It works reliably and is faster than NBD because of the limitation in vSphere.

Does that mean that this new feature eliminates the speed limitation, or is it only meant to isolate the traffic?


Hi @vNote42, thx for sharing. I did not know that this is already possible in vSphere. I have always used and recommended the hot-add method to my colleagues; I use it whenever direct SAN is not possible or when VBR is virtualized with an iSCSI repository. It works reliably and is faster than NBD because of the limitation in vSphere.

Does that mean that this new feature eliminates the speed limitation, or is it only meant to isolate the traffic?

Good question, Nico! I did not test performance with this new service tag, but I do not think it will be faster than before.

No question, direct and hotadd modes are quite a bit faster. But NBD is the simplest to implement, and for many VMs NBD is just fast enough. I know quite a few environments where larger VMs get backed up using direct mode and the many smaller ones with NBD mode. For the backup window it is fast enough, with no additional components (virtual proxies) needed. And NBD does not scale that badly when the VMs to back up are spread across more hosts.


Mhh… the management traffic is routed over the management network, but the data traffic is reliably routed over the backup network.

And do I really want to use NBD only? We see a speed difference of at least one to five - in some environments one to ten - between NBD and hotadd mode…

 

Interesting topic to discuss with all the architects here I think 😎

Interesting, joe! In my testing I did not manage to use another network for backup with Preferred Networks. Could it be that you use different name resolution in your backup environment?


[update]

Just updated the post: do not use this feature if you have to route isolated backup traffic. Some strange things happen then!

 


Thx for letting us know that it’s not recommended when the isolated backup traffic has to be routed, @vNote42.


Hi @vNote42. Nice post 👍

I configured a secondary VMkernel adapter with the vSphere Backup NFC service enabled on each ESXi host. When configuring vmk1, the TCP/IP stack indeed refers to the gateway of vmk0, so I pointed it to the gateway of the backup LAN. We configured this gateway on the firewall; however, it is effectively not being used. Since it is impossible to change the DNS of the default TCP/IP stack, this results in adding entries to the hosts files of the ESXi hosts. For the backup LAN we used 172.16.200.X/24 with gateway 172.16.200.1 defined on the new vmk1. For all servers hosting Veeam roles like proxy or repository, we added a secondary network card and ended up adding their addresses in the 172.16.200.0/24 network to the hosts file of each ESXi host. On the servers hosting the proxy and repository roles we added the same additional entries as on the ESXi hosts, but did not add a gateway for the secondary NIC. All of this separates NBD traffic from the management traffic, and with esxtop you can confirm the separation. It works, however it is a pretty custom solution 😉
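To illustrate, the additional hosts file entries look roughly like this (hostnames and addresses are just examples in the 172.16.200.0/24 backup LAN; the format is the same in /etc/hosts on ESXi and in the hosts file on the Windows servers):

# example backup-LAN entries added on both sides
172.16.200.10   veeam-proxy01
172.16.200.11   veeam-repo01
172.16.200.21   esxi01
172.16.200.22   esxi02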


Thanks for the update. Interesting to see this one.


Hi @vNote42. Nice post 👍

I configured a secondary VMkernel adapter with the vSphere Backup NFC service enabled on each ESXi host. When configuring vmk1, the TCP/IP stack indeed refers to the gateway of vmk0, so I pointed it to the gateway of the backup LAN. We configured this gateway on the firewall; however, it is effectively not being used. Since it is impossible to change the DNS of the default TCP/IP stack, this results in adding entries to the hosts files of the ESXi hosts. For the backup LAN we used 172.16.200.X/24 with gateway 172.16.200.1 defined on the new vmk1. For all servers hosting Veeam roles like proxy or repository, we added a secondary network card and ended up adding their addresses in the 172.16.200.0/24 network to the hosts file of each ESXi host. On the servers hosting the proxy and repository roles we added the same additional entries as on the ESXi hosts, but did not add a gateway for the secondary NIC. All of this separates NBD traffic from the management traffic, and with esxtop you can confirm the separation. It works, however it is a pretty custom solution 😉

Thanks for your detailed solution! Custom is great as long as it works for your environment and is not too complex 😂

