
Since we added Linux Repositories to our Veeam Backup environment in my post from last week, why not add some Linux Proxies as well? I will show you how in this post. And trust me…this one will be far less painful. But adding Repositories wasn’t so bad, was it? 😏
 

System Requirements

The first thing you need to do is check which operating systems, hardware, and software are supported. The supported Linux OSes are exactly the same as for Repositories and are listed below:

Linux Proxy Requirements

As far as hardware goes, your server should have a minimum 2-core CPU, plus 1 core per 2 concurrent tasks (take note, this is a performance improvement since v12, up from 1 core per concurrent task); and a minimum of 2GB RAM, plus 500MB per concurrent task. But for best performance, and to run multiple Jobs and tasks, you should have considerably more cores and RAM than the minimum. As with the Veeam Repository, look to the Veeam Best Practice Guide for actual Proxy sizing guidelines.
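As a purely illustrative example using those minimums, a Proxy sized for 8 concurrent tasks would need 2 + (8 ÷ 2) = 6 cores and 2GB + (8 × 500MB) = 6GB RAM.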

The remaining requirements depend on the Backup Transport Mode you plan to use – Direct Storage (DirectSAN or DirectNFS), Virtual Appliance (hotadd), or Network (nbd).

  • For Direct Storage mode, the Proxy should have direct access to the storage the source (production) VMs are on. The open-iscsi and multipathing packages also need to be installed (both are pre-installed by default on standard Ubuntu installs). If using DirectNFS, the NFS client package needs to be installed as well – nfs-common on Debian-based distributions or nfs-utils on RHEL-based ones. A quick package-check sketch follows below this list.
  • If you choose Virtual Appliance mode, then the Proxy VMs must have access to the source VM disks they process. The VM Proxies must also have VMware Tools installed (for vSphere Backups) as well as a SCSI 0:X Controller.
    Note: Linux Proxies cannot be used as Guest Interaction Proxies, and VM Proxies do not support the VM Copy scenario.
  • If using Backup from Storage Snapshots, the Proxy does not need access to the Volume itself, but rather to its snapshot or clone. To use BfSS, the Proxy Transport mode should be configured for either Automatic or Direct Storage, and the Backup Job > Storage section > Advanced button > Integration tab should have the box for Backup From Storage Snapshots enabled.
    Enable BfSS

     

You can reference additional requirements and limitations in the User Guide here and here.
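If you want to verify the transport-mode packages mentioned above are present on an Ubuntu/Debian Proxy, a minimal check could look like this (RHEL-based distributions would use dnf and the nfs-utils package instead):

# Check whether the iSCSI, multipath, and NFS client packages are installed
dpkg -l open-iscsi multipath-tools nfs-common

# Install any that are missing (nfs-common only matters for DirectNFS)
sudo apt install open-iscsi multipath-tools nfs-common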

As with Linux-based Repositories, SSH and the Bash shell are also required. You can check whether the user you use to configure your Linux server is using Bash by looking at that user's entry in the passwd file – the shell is the final field in the entry:
 

Check User Shell
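As a minimal check (assuming you're logged in as the user in question), either of these will show the shell in use:

# Show the passwd entry for the current user; the shell is the final field
grep "^$(whoami):" /etc/passwd

# Or simply print the current shell
echo $SHELL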

Veeam installs two services used by Proxies – the Veeam Installer Service and the Veeam Data Mover component. Although the Veeam Data Mover component can be persistent or non-persistent, for Linux Backup Proxies the Data Mover must be Persistent.
 

Linux Proxy Data Mover Requirement


Linux Installation

As I mentioned in my Repository post, I won't go through the Linux install. I gave a few suggestions regarding gaining install experience in that post.

 

Linux Configuration

I will again be configuring my Linux server to connect to my storage using the iSCSI protocol. If you use another protocol, make sure to use the commands appropriate for that protocol, as well as for your Linux distribution (Debian, Red Hat, or other).

After installing your OS, perform an update and upgrade to the software and packages:

Linux OS Update & Upgrade
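On Ubuntu/Debian, for example, that update and upgrade would look like this (RHEL-based distributions would use dnf instead):

sudo apt update && sudo apt upgrade -y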

To save some reading time (yay! 😂), I won't provide the details of the remaining Linux configuration steps as they are identical to what I provided in my Repository post. I'll just provide a high-level list here, followed by a rough command sketch. For a reminder of each step's details, please refer to the Implementing Linux Repository post link above.

  1. Change your IQN to a more relevant name:
    IQN example: iqn.2023-12.com.domain.hostname:initiator01
  2. If you're using a SAN and your storage vendor has specific iscsi.conf and multipath.conf configurations, make those changes, then restart the iscsid and multipathd services to apply them
  3. Though not required, it is recommended to change your adapter names based on function:
    Adapter Alias/Name Change

     
  4. After changing adapter names, configure iscsiadm iface for each storage adapter used for multipathing, then restart the iscsid service. You may need to restart your server for the changes to take effect
  5. If using Direct Storage or Backup From Storage Snapshots, log onto your production storage array and configure your production Volumes based on the Transport method you choose to use. I provide further details on each method below
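As a rough, Ubuntu-flavoured sketch of steps 1-4 (the iface name iface-storage1 and the NIC name ens224 are made-up examples – substitute your own, and take the actual iSCSI/multipath settings from your storage vendor's documentation):

# 1. Set a more relevant IQN by editing the InitiatorName line
sudo nano /etc/iscsi/initiatorname.iscsi

# 2. Apply any vendor-specific iSCSI/multipath settings, then restart the services
sudo systemctl restart iscsid multipathd

# 3. (Adapter renaming is distro-specific – netplan or systemd .link files on Ubuntu – so it's omitted here)

# 4. Create an iscsiadm iface per storage adapter, bind it to its NIC, then restart iscsid
sudo iscsiadm -m iface -I iface-storage1 -o new
sudo iscsiadm -m iface -I iface-storage1 -o update -n iface.net_ifacename -v ens224
sudo systemctl restart iscsid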

After everything is configured, you're now ready to finalize your Proxy. I'll finish this configuration section based on the Backup Transport method you're implementing…

Virtual Appliance

This is the easiest method to implement. Aside from making sure your Proxy VMs have access to the source disks of the production VMs you're backing up, all you need on your Linux system is to make sure the Linux user account you use to connect to your Veeam server is using the Bash shell, and that SSH is enabled. Configuration steps 1-4 above are not needed. Also, if you use a mix of physical and virtual Proxies in your environment, as I do, I generally configure my VM Proxies with 8 Cores and 16GB RAM. If you use all virtual Proxies, then make sure to size all your Proxy VMs according to Veeam Sizing Best Practices. I provided the BP link above.

Backup From Storage Snapshot (BfSS)

If you're using BfSS, log onto your storage array and configure access to your production Volumes with the IQN you set earlier in the initiatorname.iscsi file. As mentioned in the Requirements section, for BfSS the only access the Proxy server requires is to snapshots or clones, not the Volumes themselves. For my vendor's array, the setting I use for this is to configure "Snapshot only" access to the production Volumes.

The last thing needed is to connect the Proxy server to the storage array (target):

sudo iscsiadm -m discovery -t sendtargets -p Discovery-IP-of-Array
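Once discovery succeeds, you can verify what the Proxy now knows about the array (just a sanity check, not something Veeam requires):

# List the discovered target records and any active sessions
sudo iscsiadm -m node
sudo iscsiadm -m session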

Direct Storage (DirectSAN)

This Transport method, I just recently learned, is a bit "clunky" with Linux. As I was researching the User Guide, and filling in the gaps it doesn't cover by reading other posts on the Web, I came across this Forums post, which states at least one caveat of why not to use this method with Linux. Since v11a, and as noted in the v11a Release Notes – "Linux-based Backup Proxies configured with multipath does not work in DirectSAN". There are a couple of comments in the post from users who did occasionally get multipathing to work. But, with it being so unstable, it's probably best to avoid this method until Veeam resolves the multipathing issue.
 

For article completeness, and for when Veeam does resolve the multipathing issue, I will still share the configuration needed for DirectSAN. On your production storage, each Volume needs ACL access with the Linux server's IQN, as is done with BfSS. But for each LUN, "Volume only" access needs to be configured as well. No Snapshot access is required.

On your Linux server, perform a target discovery command, as is done with BfSS. After that, you then need to perform a target "login" operation:

sudo iscsiadm -m node -l
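After the login, you can confirm the LUNs are presented to the Proxy as block devices (leave them unmounted and uninitialized – Veeam reads them directly):

# Show active iSCSI sessions and the block devices they expose
sudo iscsiadm -m session
lsblk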

Veeam Server

After you've finished configuring your Linux server for the Transport method you are using, you then need to add your Linux server as a managed server in Veeam, then go through the Add Proxy > VMware Backup Proxy process. Once you perform those 2 steps, you can then either manually assign this specific Proxy to your Jobs as needed, or allow Veeam to choose for you via the Automatic setting.

 

Conclusion

And that's all there is to implementing Linux Proxies into your Veeam Backup environment. Told you it wasn't so bad 😉 You now have a fully Linux-integrated Veeam Backup & Replication environment! ...well, at least fully integrated Linux Repositories and Proxies. 🙂

 

I see the article uses a direct ISCSI link from the Linux Proxy to the repository. I think we have a complication that I don’t know if anyone else has encountered. When our system was set up it was fully virtualised with the VBR as a VM but with the ISCSI connection to the storage from within the VBR server OS via an ISCSI initiator. That means the repository is seen as a mapped drive inside the VBR server. How exactly does that work for a linux proxy using Hot Add with BfSS enabled? It seems to work as all our backups are working but not sure how it is managing a connection and snapshot or if it is failing over to another method.



It would depend on the Linux version but with Ubuntu which we use it would use the open-iscsi client to connect - https://documentation.ubuntu.com/server/explanation/storage/iscsi-initiator-or-client/index.html

We don’t use iSCSI as we are an FC shop.



Thanks for the prompt reply Chris, sadly we are on Microsoft with iSCSI and I suspect that has implications for performance. I suspect the VBR server is going to have to handle all the throughput from and to the virtual Ubuntu proxies.

I am seeing the proxies as the primary bottleneck and wondering if I need to do anything about their specifications (8GB, 2vCPU, single task), and whether this is genuine, i.e. the proxies themselves struggling, or if it is really down to the low-level I/O between VBR and Linux Proxy due to the iSCSI.

I have already tweaked the Network Buffers on both VBR and Linux Proxies which has helped but I am wondering about adding more vCPU to the proxies. As I don’t run more than a single task on the proxies I would have thought 2vCPU would be adequate.

I have also put the entire backup network on a dedicated subnet using a custom ESXi TCP/IP Stack to avoid the hidden management network bottlenecks in ESXi. I may be trying to gild the lily as the network is only 1GbE.

We have configured static LAG so there is a pair of 1GbE links available between all ESXi hosts, but the way the IP-based routing hash works means only one is active at any time, so it doesn't help bandwidth between hosts. That said, there is an exception to the rule: it works wonders for TrueNAS on the iSCSI repository as that does use both (not figured out how it does that, but monitoring during backups shows writes use one and reads the other, which does double bandwidth and is great).

Any suggestions on best practice to configure the BfSS options in this scenario would be gratefully accepted i.e. should I enable any of the failovers (I haven’t at present and it seems to be working) or should I not use BfSS at all as I am using iSCSI. As I said the underlying I/O path is a mystery to me and the fact BfSS may be working at all seems hard to understand to me.


I myself cannot give advice on Linux stuff but there was a great post on the community here about Linux Proxies by Shane - Implementing Linux Veeam Proxies | Veeam Community Resource Hub

Check that out and I am sure there are more here that you can find by searching.  Or maybe search the forums as well - https://forums.veeam.com

 


Hi Chris,

Appreciate the links, I have read the first one already and it doesn’t answer the question that is sticking in my mind at the moment. It isn’t specifically a Linux question either. It is more to do with what happens to a Virtual VBR server that is serving as the iSCSI initiator to the external repository.

Assuming we have an iSCSI connection directly from inside a Windows VM acting as the VBR server to an external NAS, how does Veeam mount the Hot Add disk to the Linux Proxy? Is this an NFS share to the iSCSI mapped extent or is some other sorcery employed? I am assuming here that Veeam will not instruct the Linux proxy to mount the iSCSI share directly but will proxy the iSCSI extent to the Linux proxy somehow (possibly an NFS share on the mapped drive) and the VBR will create the storage snapshot for the BfSS itself.

Am I barking up the wrong tree? 



The Proxy would use the Hot-Add to read the disk information for backup then send that to the VBR server which is the repository server with the iSCSI repo attached.  So traffic goes VBR > Proxy > Back to VBR for Repo.



Based on this page, it’s not the Veeam server that needs to be connected to the external NAS, it’s the proxy that should be. 

It needs to be visible but not initialized by the OS of the proxy server. I would suggest reaching out to Veeam support to confirm the best way to safely set this up.


See the following link for the backup process - VMware Backups | Veeam Backup & Replication Best Practice Guide



Actually I may have misunderstood, you're not trying to use Direct SAN to back up VMs stored on a NAS, you've connected the NAS as an iSCSI repo to the VBR server.

The traffic would flow from the proxy reading the source VM, to the VBR server, and onto the repository via iSCSI.


Hi ​@NickDaGeek -

First off...your post really should be its own “Discussion Boards” question here on the Hub. 

This (my) post here you're commenting on deals specifically with how to create a physical Linux Proxy configured for BfSS using iSCSI connections so the Proxy is able to "see" the source (VM) storage to use that method. Yes...it can also be used with VM Proxies, but I think there still may be some traversing of the virtual stack which may hinder perf a bit. As I also shared in the post, I don't recommend DirectSAN for the reason stated, but this config is also req'd for that method if you do use it.

I’m not entirely sure what specific question you’re asking here though. You’re jumping around a bit 😊  Are you wondering where your perf issue is coming from? Are you concerned about VBR sizing with using it as a Repo? I recommend just creating a new post with goals and/or questions you have. Share as much about your backup environment as you can and we can try to help further. But keep in mind, Support is always your “go to” folks to get help.

Regarding sizing, take note of the Proxy sizing guideline link I share in this post. That does need to be considered. If you're using a VM for Proxies, you can't have more than 8 vCPUs per Proxy (be it Linux or Windows). Size your Proxy for tasks and processing according to sizing guidelines in the Guide. If you're doubling up your VBR server to also have the Repo role, you need to size your VBR server for both the VBR AND Repo roles.

Best. 


Hi ​@Chris.Childerhose , thanks for confirming the path. I had a chat with Veeam Tech during a webinar and they confirm the mount path and data traffic routing for Hot Add in this scenario as exactly what you described.

Hi ​@Tommy O'Shea as you can see from above great minds think alike and it is as Chris identified; the VBR is proxying the external iSCSI repo for the linux proxies.

Hi ​@coolsport00, thanks for the confirmation that you are talking about a physical not virtual proxy in your article. That makes sense to me now.

Your comment about traversing the virtual stack affecting performance is very well observed. Going fully virtual changes the network topology considerably. As we are configured here, with iSCSI to the VBR not the host, there is both internal and external networking involved: internal network between VBR and the virtual proxy on the same physical host, and external network between VBR and the Repository. There is also external network between the VBR and proxies on other physical hosts when working across hosts.

Have raised the performance and configuration with Veeam Tech as a support ticket. They confirm, as Chris and Tommy said, that VBR is proxying the iSCSI repository to the linux proxies. Think we are saturating network on VBR and its links to the external network. VBR during jobs is a two way traffic proxy between the Linux proxies and the External Repo.

My gut reaction is also that we are not able to use BfSS in this scenario so the tickbox is being ignored by Veeam and it is directly mounting the VMDK on the repo via VBR. This might explain logs showing other jobs finding the resource locked.

To one and all: thank you for your time and suggestions/information. I have learned a lot. I now realise the design of network topology and placement of proxies and repositories and their connection methods is fundamental to performance; and a lot more complicated than my predecessors realised.

Thanks again 😊


Not a problem glad to have helped somewhat.


@NickDaGeek - no problem. And, I’m not sure you can even use BfSS with TrueNAS. I don’t think that’s even supported. Your only route would be DirectSAN or hotadd. HA isn’t bad at all, with good network throughput. But it looks like you don’t have that necessarily (only 1Gb?)….so you’d be limited. But DirectSAN even wouldn’t be that great either...again cuz of your small networking throughput size.

Best.