Transport Modes on a chart 👐


Userlevel 7
Badge +6

The applicability and efficiency of each transport mode depend primarily on the type of datastore used by the source host (local or shared) and on the backup proxy type (physical or virtual). The table below shows recommendations for where to install the backup proxy, depending on the storage type and the desired transport mode.

:question: Which one do you use?

[Table: recommended backup proxy placement (physical or virtual) by storage type and transport mode]

13 comments

Userlevel 7
Badge +13

Do you use Linux based proxies for virtual appliance mode, @JMeixner?

Up to now we have Windows based Proxies only.

A Linux based Proxy is the next thing I want to try. If this works well, it will be a lot cheaper and smaller than the Windows VMs.

More and more Linux proxies will come with v11. I am quite sure we will push them for hardware proxies as well, mainly because of XFS with immutability. Only Linux tape proxies are left to come.
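
To make the immutability part concrete: as far as I understand, the hardened Linux repository relies on the standard filesystem immutable attribute (the flag that chattr +i sets) on XFS. A minimal Python sketch for checking that flag on a Linux host - the ioctl constants are the usual x86_64 values and the sample path is purely hypothetical:

```python
import fcntl
import struct

# Linux ioctl for reading inode attribute flags (x86_64 constant values)
FS_IOC_GETFLAGS = 0x80086601   # _IOR('f', 1, long)
FS_IMMUTABLE_FL = 0x00000010   # the flag toggled by 'chattr +i'

def is_immutable(path: str) -> bool:
    """Return True if the file carries the immutable attribute."""
    with open(path, "rb") as f:
        buf = struct.pack("l", 0)
        flags = struct.unpack("l", fcntl.ioctl(f.fileno(), FS_IOC_GETFLAGS, buf))[0]
        return bool(flags & FS_IMMUTABLE_FL)

# Hypothetical backup file path, for illustration only
print(is_immutable("/backups/job01/vm01.vbk"))
```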

Userlevel 7
Badge +13

I use each of the transport modes. Depending on the environment, each of them could be faster - in terms of backup duration - than the others.

Sometimes I wish transport mode were a setting at the job level, not at the proxy level. That would make it much easier to pick the “fastest” mode for each workload.

Userlevel 7
Badge +17

@JMeixner, cheaper because you do not use data center licensing and pay Microsoft per server? Or are you thinking of a service provider environment? I see some experiences on which Linux distro to use shared by @HannesK on the R&D forums: https://forums.veeam.com/post360698.html#p360698

Yes, I want to try Ubuntu. So no Windows license costs.

Userlevel 7
Badge +6

Do you use Linux based proxies for virtual appliance mode, @JMeixner?

Up to now we have Windows based Proxies only.

A Linux based Proxy is the next thing I want to try. If this works well, it will be a lot cheaper and smaller than the Windows VMs.

In my opinion, the biggest benefit is that the Linux proxy doesn't rely on the VMware VDDK but on Veeam-written code… :nerd:

Userlevel 7
Badge +20

@haslund I did consider whether it could be dual-roled or not with regard to the proxy & repository roles, as that would compound the issue further. However, my main concerns were around NIC provisioning and then, by extension, the CPU impact on the hosts that we’ve both alluded to, from pinning VMs to hosts and unbalancing clusters.

A fairly safe assumption to make here is that customers utilising storage snapshots have made a fairly significant investment in both storage and Veeam licensing, so it’s interesting to hear of the “all virtual” approach - definitely not a use case I’d have considered before.

Anyone else have any further use-case examples to contribute?

Userlevel 7
Badge +14

I only use Direct Storage Access with physical servers and I’m interested why people would be using it with virtual machines? I’m not disputing it’s possible, just why anyone would want to use it?

I know quite a few customers using either NFS or iSCSI who do not want physical backup proxies, but they do want to leverage backup from storage snapshots.

It’s an interesting point, regarding the other bits I mentioned such as dedicated NICs etc, how do they overcome those challenges?

@MicoolPaul, re-reading your previous post it sounds like you are assuming the physical proxy will also be a repository, and that if we use a virtual proxy it would also be a virtual repository? You could of course use virtual backup proxies and then target physical backup repositories.

Depending on your environment size, a simple VM-to-host rule could suffice (as this does not require vSphere DRS), but the fancier way would of course be a DRS anti-affinity rule to avoid the virtual proxies being co-located on the same host(s).
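
Just to make the anti-affinity option concrete, here is a rough pyVmomi sketch that adds such a rule for two proxy VMs. The vCenter address, cluster name and VM names are made-up examples, not anything from this thread:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (hypothetical address and credentials)
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Look up a managed object (cluster, VM, ...) by its display name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

cluster = find_by_name(vim.ClusterComputeResource, "Prod-Cluster")
proxies = [find_by_name(vim.VirtualMachine, n)
           for n in ("veeam-proxy-01", "veeam-proxy-02")]

# DRS anti-affinity rule: keep the virtual proxies on different hosts
rule = vim.cluster.AntiAffinityRuleSpec(name="veeam-proxies-separate",
                                        enabled=True, vm=proxies)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)
```

The same thing is of course a two-minute job in the vSphere Client under Cluster > Configure > VM/Host Rules.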

As for the iSCSI/NFS traffic from the virtual backup proxy, this definitely needs some consideration. Ideally you would have a dedicated NIC for the virtual backup proxy to use, but if you have a lot of hosts this might not be feasible - instead, I often see Network I/O Control (NIOC) used with share values.

Userlevel 7
Badge +14

Do you use Linux based proxies for virtual appliance mode, @JMeixner?

A Linux based Proxy is the next thing I want to try. If this works well, it will be a lot cheaper and smaller than the Windows VMs.

@JMeixner, cheaper because you do not use data center licensing and pay Microsoft per server? or are you thinking of a service provider environment? I see some experiences shared by @HannesK on the R&D forums on which Linux distro to use: https://forums.veeam.com/post360698.html#p360698

Userlevel 7
Badge +17

Do you use Linux based proxies for virtual appliance mode, @JMeixner?

Up to now we have Windows based Proxies only.

A Linux based Proxy is the next thing I want to try. If this works well, it will be a lot cheaper and smaller than the Windows VMs.

Userlevel 7
Badge +20

I only use Direct Storage Access with physical servers and I’m interested why people would be using it with virtual machines? I’m not disputing it’s possible, just why anyone would want to use it?

I know quite a few customers using either NFS or iSCSI who do not want physical backup proxies, but they do want to leverage backup from storage snapshots.

It’s an interesting point, regarding the other bits I mentioned such as dedicated NICs etc, how do they overcome those challenges?

Userlevel 7
Badge +14

I only use Direct Storage Access with physical servers and I’m interested why people would be using it with virtual machines? I’m not disputing it’s possible, just why anyone would want to use it?

I know quite a few customers using either NFS or iSCSI who do not want physical backup proxies, but they do want to leverage backup from storage snapshots.

Userlevel 7
Badge +20

I only use Direct Storage Access with physical servers and I’m interested why people would be using it with virtual machines? I’m not disputing it’s possible, just why anyone would want to use it?

It feels like a rabbit hole of inefficiency to head down.

 

Best practice is to dedicate your ESXi NICs for iSCSI, so we’d either end up using the general VM-traffic NICs for iSCSI - which could easily saturate, or at least greatly contribute to saturating, the bandwidth of the production VM network - or we’d need to dedicate NICs to virtual iSCSI direct access.

Assuming we decided on dedicated NICs, then to ensure that our cluster is still capable of performing vMotions on our proxies, for example, we’d need those dedicated NICs on every host.

Once that’s out of the way we then need to think about our repository: are we going to share our iSCSI NICs to reach a physical iSCSI storage for the repository? We wouldn’t want the backup files inside a VMDK after all, and unless we can use a share capable of supporting block cloning, that would rule out space efficiency as a candidate. (Assuming we dual-homed the proxy and repository roles on this VM.)
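
On the block-cloning point: on ReFS, or XFS formatted with reflink, this is what lets synthetic fulls reference blocks of earlier backups instead of rewriting them. Outside of Veeam you can see the same mechanism via the Linux FICLONE ioctl - a small sketch with made-up paths, which only works on a reflink-capable filesystem:

```python
import fcntl

FICLONE = 0x40049409  # Linux ioctl _IOW(0x94, 9, int): share src's blocks with dst

def reflink_clone(src_path: str, dst_path: str) -> None:
    """Create dst as a block-cloned (reflinked) copy of src; no data is rewritten."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

# Hypothetical backup files on an XFS volume created with reflink=1
reflink_clone("/backups/job01/full_old.vbk", "/backups/job01/full_synthetic.vbk")
```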

Finally we’ve got to think about the number of concurrent tasks available and the vCPU and RAM demands of the proxies. Unless we deployed the physical NICs to all hosts, we’re impacting the cluster’s ability to load balance anyway and would have to pin VMs to hosts, which could impact backup and recoverability during maintenance windows. So we could end up with unbalanced hosts. And if we did deploy multiple VMs with physical NIC access, we’d still need rules to ensure the VMs don’t co-exist on the same hosts, otherwise they’d impede each other’s access to the iSCSI NICs by sharing them.
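
For the task/vCPU/RAM side, a back-of-envelope calculation is usually enough. The numbers below are only the commonly cited rule of thumb of roughly one vCPU and 2 GB RAM per concurrent proxy task - check the best-practice guidance for your version before sizing anything for real:

```python
def size_proxy(concurrent_tasks: int, base_ram_gb: int = 4) -> dict:
    """Rule-of-thumb proxy sizing: ~1 vCPU and ~2 GB RAM per concurrent task."""
    return {
        "vcpu": concurrent_tasks,                      # one task per vCPU
        "ram_gb": base_ram_gb + 2 * concurrent_tasks,  # per-task RAM plus OS overhead
    }

# e.g. a virtual proxy intended to run 8 concurrent tasks
print(size_proxy(8))   # {'vcpu': 8, 'ram_gb': 20}
```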

 

That’s my thought process on it, so I’d be curious to hear from anyone using it and what their thoughts are on the above. I’d rather just use Virtual Appliance mode, a physical host or, dare I say it, Network mode (for simplicity).

Userlevel 7
Badge +14

We are using Network and Virtual Appliance Mode.

Do you use Linux based proxies for virtual appliance mode, @JMeixner?

Userlevel 7
Badge +17

We are using Network and Virtual Appliance Mode.

All of our VMware clusters are using vSAN, so Direct Storage Access mode is not an option…
