Question

Veeam Hardened Repository multihoming

  • 13 February 2024
  • 2 comments
  • 77 views

Userlevel 2

Hello Community,

As part of a hardware refresh project for a new customer, we are planning to create a separate /24 VLAN subnet behind the firewall in order to better segment and protect the production vSphere environment, the upcoming vSphere DR cluster, and the Veeam backup fabric, as they currently reside in the same flat VLAN and IP subnet as all clients and servers (as depicted below):

[network diagram of the current flat VLAN/subnet]

Currently the Veeam backup server / primary backup repository is a physical server running Windows Server 2019 with 50 TB of local storage. We are planning to move the Veeam backup server to a new dedicated VM hosted on the upcoming vSphere DR cluster and, at the same time, repurpose the primary backup repository hardware into a Veeam Hardened Repository.

The main concern here is the 60 or so client computers running the Veeam Agent for Windows. More specifically, in the current scenario all network traffic between the source Veeam Data Movers running on the backup agent side and the target Veeam Data Movers running on the primary backup repository side is local, within the same /23 subnet depicted above. After placing the Veeam backup fabric in the separate /24 VLAN subnet behind the firewall, all network traffic between the source and the target Veeam Data Movers would need to traverse the firewall, and this is something we want to avoid at all costs, for several reasons.

It seems that, for some reason, the external network and security consultants are reluctant to use VLAN routing and create proper ACLs on the core switch, so we are trying to find a way to work around this.

Although not recommended (especially from a security perspective), technically we could multihome the upcoming Veeam Hardened Repository so that the source Veeam Data Movers running on the backup agent side connect locally from the /23 subnet and, at the same time, the source Veeam Data Movers running on the VMware backup proxy side connect locally from the new /24 subnet.

Could you please tell me if this is something that can be achieved, for example by properly configuring Network Traffic Rules or by splitting name resolution between the two subnets?
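To illustrate what I mean by split name resolution, here is a minimal sketch (with purely hypothetical IP ranges, not anything Veeam-specific): the same repository hostname would resolve to a different NIC of the dual-homed repository depending on which subnet the connecting machine sits in.

```python
import ipaddress

# Hypothetical addressing: the hardened repository is dual-homed,
# with one NIC in each subnet.
REPO_NICS = {
    ipaddress.ip_network("192.168.0.0/23"): "192.168.0.10",  # flat subnet (agents)
    ipaddress.ip_network("10.10.10.0/24"): "10.10.10.10",    # new backup VLAN (proxies)
}

def repo_ip_for(client_ip: str) -> str:
    """Return the repository IP a given client should resolve, emulating
    split-horizon DNS (or per-subnet hosts-file entries)."""
    addr = ipaddress.ip_address(client_ip)
    for net, nic_ip in REPO_NICS.items():
        if addr in net:
            return nic_ip
    raise ValueError(f"{client_ip} has no local repository NIC")

# An agent in the flat /23 resolves the local NIC:
print(repo_ip_for("192.168.1.42"))   # 192.168.0.10
# A VMware backup proxy in the new /24 resolves the other NIC:
print(repo_ip_for("10.10.10.55"))    # 10.10.10.10
```

With this kind of resolution in place, traffic on both sides would stay local and never hit the firewall.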

It would be great if someone could kindly advise me on this matter.

Thanks and Regards,

Massimiliano


2 comments

Userlevel 7
Badge +20

Based on what you have described, for the setup you want, there is no way to avoid adding firewall rules allowing traffic from the Agents to the VBR/repository servers in the new /24 VLAN. The only way to avoid traversing the FW/VLAN would be to have the Agents send their backups directly to Object Storage or something similar. Alternatively, if you are a Service Provider, you could deploy Veeam Cloud Connect, which would be another way around this, as it gives access to repositories through Veeam Cloud Gateway servers.

I don’t see any magical way around this, and you will need to open the ports noted here: Ports - Veeam Agent Management Guide

Userlevel 7
Badge +8

Hi Massimiliano,

I will try to give you some ideas, but this looks more like an architectural decision/issue.

First of all, I read that you want to run the Veeam Hardened Repository (VHR) in the VMware DR cluster, right? If the VHR is a VM, it is vulnerable to datastore encryption if your hosts get hit in an attack.

Secondly, for the network split regarding the VMs: in the past I created a dedicated VLAN/network for backup traffic in my vSphere environment, presented it to the proxies, the VBR server and the ESXi hosts, then configured it as the preferred network in the VBR server and added the corresponding virtual Ethernet adapter to the VBR VM.
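The preferred-network selection described above can be sketched roughly like this (hypothetical ranges; VBR applies this logic internally once you define preferred networks under its global network traffic rules): among a server's addresses, an address inside a preferred backup network wins.

```python
import ipaddress

# Hypothetical dedicated backup VLAN configured as a preferred network.
PREFERRED_NETWORKS = [ipaddress.ip_network("10.10.20.0/24")]

def pick_ip(candidate_ips: list[str]) -> str:
    """Prefer an address inside a preferred (backup) network;
    otherwise fall back to the first address."""
    for ip in candidate_ips:
        if any(ipaddress.ip_address(ip) in net for net in PREFERRED_NETWORKS):
            return ip
    return candidate_ips[0]

# A proxy with a management NIC and a backup NIC:
print(pick_ip(["192.168.1.30", "10.10.20.30"]))  # 10.10.20.30
# A host with no backup NIC falls back to management:
print(pick_ip(["192.168.1.31"]))                 # 192.168.1.31
```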

Third, for the workstations: if they need to reach the VBR server over the network and you want to skip firewall inspection of that traffic, you are right that you may need to configure some ACLs / traffic rules on the core switches, so that when a client wants to reach the VBR server IP address, the traffic is routed over the core switch and not the FW. Bear in mind, though, that you are exposing that network to the workstations by bypassing FW control; it is up to you what works best in your case.
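Such a core-switch ACL could be sketched as follows (hypothetical addresses; the 2500-range ports are an illustrative subset of Veeam's default data-mover port range, so check the actual ports in the Veeam documentation): permit only agent-to-repository data-mover traffic, with the usual implicit deny at the end.

```python
import ipaddress

# Hypothetical ACL entries: (source network, destination host, permitted ports).
ACL = [
    ("192.168.0.0/23", "10.10.10.10", {2500, 2501, 2502}),
]

def evaluate(src_ip: str, dst_ip: str, dst_port: int) -> str:
    """Return 'permit' if any ACL entry matches, else 'deny' (implicit deny)."""
    for src_net, dst_host, ports in ACL:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)
                and dst_ip == dst_host
                and dst_port in ports):
            return "permit"
    return "deny"

# Agent traffic to the repository's data-mover port is routed/permitted:
print(evaluate("192.168.1.42", "10.10.10.10", 2500))  # permit
# Anything else (e.g. RDP to the repository) is dropped:
print(evaluate("192.168.1.42", "10.10.10.10", 3389))  # deny
```

This keeps the exposure limited to the repository IP and the backup ports rather than the whole /24 VLAN.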

Final advice: test, test, test!

Run a config, test it, and see if it suits your needs; if not, move on to plan B, plan C, etc.

Hopefully this helps.

Cheers,

Luis.
