Question

Hyper-V - Network Segmentation

  • December 15, 2025
  • 3 comments
  • 38 views

Stabz

Hi folks,
We are increasingly deploying Hyper-V environments following recent changes in VMware’s pricing model. In my designs, I typically separate production and backup infrastructures into dedicated VLANs.

However, when using an on-host proxy configuration, applying this architecture means that the backup traffic inevitably passes through the firewall, which can lead to excessive load and negatively impact backup performance.

The most straightforward approach today is to place both the Veeam and the Hyper-V components in the same subnet to avoid any routing.
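
To illustrate what I mean by "avoid any routing": if the Veeam components and the Hyper-V hosts don't share a subnet, every backup packet has to be routed and therefore passes through the firewall. Here is a minimal Python sketch of that check; the addresses and prefix are made-up examples, not a recommendation.

# Check whether backup traffic between two endpoints stays inside one subnet
# (pure layer 2, no firewall in the path) or has to be routed.
import ipaddress

backup_subnet = ipaddress.ip_network("10.20.30.0/24")   # hypothetical backup subnet
veeam_server  = ipaddress.ip_address("10.20.30.10")     # hypothetical VBR server
hyperv_host   = ipaddress.ip_address("10.20.40.15")     # hypothetical Hyper-V host

def is_routed(src, dst, subnet) -> bool:
    """True when src and dst are not both inside the subnet, i.e. the
    traffic must be routed (and will typically hit the firewall)."""
    return not (src in subnet and dst in subnet)

if is_routed(veeam_server, hyperv_host, backup_subnet):
    print("Traffic crosses subnets -> plan firewall rules and firewall sizing.")
else:
    print("Traffic stays in one subnet -> no routing, no firewall in the path.")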
 

How are you currently handling this in your environment?

3 comments

lukas.k
  • Influencer
  • December 15, 2025

Hi @Stabz,

Pretty easy from a security point of view: you need to have proper segmentation in place.

 

I always recommend that customers have proper segmentation in place, dedicated to the DR environment (which Veeam should be considered part of). You could use VLANs such as these (a small sketch follows the list):

Data VLAN: for communication between VBR, repo, proxy, etc.

MGMT VLAN (dedicated for DR!): for OOBM of tape libraries and physical components

Immutable VLAN: for OOBM of immutable storage
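
To make that concrete, here is the small sketch referenced above, modelling such a segmentation plan as data; the VLAN IDs, subnets and allowed flows are illustrative assumptions and need to be adapted to your environment.

# Hypothetical segmentation plan: each segment gets its own VLAN and subnet,
# and only explicitly whitelisted flows between segments pass the firewall.
SEGMENTS = {
    "prod":      {"vlan_id": 10,  "subnet": "10.10.10.0/24"},   # production workloads
    "data":      {"vlan_id": 100, "subnet": "10.10.100.0/24"},  # VBR, repo, proxy
    "mgmt_dr":   {"vlan_id": 110, "subnet": "10.10.110.0/24"},  # OOBM of tape / physical gear
    "immutable": {"vlan_id": 120, "subnet": "10.10.120.0/24"},  # OOBM of immutable storage
}

# Example policy: only backup traffic from production to the Veeam data VLAN
# is permitted. Admin access to the mgmt_dr / immutable segments would come
# from a dedicated jump host (not modelled here); nothing else should cross.
ALLOWED_FLOWS = {
    ("prod", "data"),
}

def is_allowed(src: str, dst: str) -> bool:
    """Traffic inside one segment needs no rule; everything else must be whitelisted."""
    return src == dst or (src, dst) in ALLOWED_FLOWS

print(is_allowed("prod", "data"))       # True  - backups can run
print(is_allowed("prod", "immutable"))  # False - prod must never reach the immutable OOBM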

 

For that reason, you shouldn't use a single VLAN for both Veeam and Hyper-V. Yes, it might be the easiest option to manage (you don't have to worry about policies), but at the end of the day, if your production environment gets attacked or infiltrated, attackers could reach your Veeam / DR systems easily, without any firewall in the way. That is exactly what you want to avoid.

 

My recommendation: when planning such an environment, use a physical firewall sized properly to handle that kind of workload. You also have to make sure that all required firewall rules are in place (refer to the Veeam KBs to make sure the policies are covered properly).
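
As a quick sanity check once the rules are in place, something like the sketch below can confirm that the firewall actually lets the required ports through. The port numbers are commonly cited Veeam defaults, but treat them as assumptions and verify the exact list for your version against the official Veeam port reference; the host names are placeholders.

# Simple TCP reachability check from a Veeam component towards a repository /
# proxy in another segment. A failed connect usually means a missing firewall
# rule (or the service is not listening).
import socket

CHECKS = [
    # (description, target host, TCP port) - hosts and ports are assumptions
    ("Veeam Installer Service",    "repo01.example.local", 6160),
    ("Veeam Data Mover",           "repo01.example.local", 6162),
    ("SMB (component deployment)", "repo01.example.local", 445),
]

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Try a TCP connection; True means firewall and service both allow it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, host, port in CHECKS:
    state = "open" if port_open(host, port) else "blocked / unreachable"
    print(f"{name}: {host}:{port} -> {state}")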

Yes, that adds extra load on the firewall, but it is necessary to maintain the security measures.

 

Most state-of-the-art firewalls already offer a throughput of at least 10 Gbit/s. Some customers even use dedicated firewalls just for this scenario.

 

Tip 1: Make sure to disable IPS and IDS for that kind of traffic, since they would add a massive workload to the firewall.

Tip 2: If you have dedicated storage, consider using an off-host proxy to retrieve the data directly from the storage, so the Hyper-V hosts don't have to handle that traffic themselves (as they would with an on-host proxy).

 

Hope that gives a good first impression.

 

Best

Lukas


Stabz
  • Author
  • Veeam Legend
  • December 29, 2025

Hey @lukas.k,

I agree with you on this approach; as I usually mention, I prefer to separate my components into different VLANs. Unfortunately, today I still encounter relatively old network infrastructures.

The Off-Host solution can be good, but it adds an additional component to maintain, which comes with a cost, and the storage must be compatible.

Too bad we don’t have an equivalent to the virtual proxy available on VMware :D


Kevin
  • New Here
  • January 9, 2026

Hi,

 

We had a similar situation last year. We started putting the backup storage into its own VLAN, separated by a firewall. Yes, the firewall could handle the backup traffic, but it did put considerable load on it, and yes, the backups were a little slower.

 

For the Hyper-V hosts specifically, we ended up adding another VLAN interface on each host, connected to the backup VLAN. Yes, it's not ideal, but it's a start. Our bottleneck now is the source storage. Make sure your backup repositories are secured. Our Hyper-V hosts are also in their own separate domain, not in the main management domain and definitely not in the office domain.

 

I have found that Hyper-V storage snapshots with an off-host proxy work OK, but it's not great, and it does not work like VMware at all. The VM snapshot remains open for the entire time the backup is running, which is not ideal. I really wish there were a "hotadd" option for Hyper-V.

 

Thanks
Kevin