
Implementing Linux Veeam Proxies


coolsport00

Since we added Linux Repositories to our Veeam Backup environment in my post from last week, why not add some Linux Proxies as well? I will show you how in this post. And trust me…this one will be even less painful. But adding Repositories wasn't so bad, was it? 😏
 

System Requirements

The first thing you need to do is check which OSes, hardware, and software are supported. The supported Linux OSes are exactly the same as for Repositories and are listed below:

Linux Proxy Requirements

As far as hardware goes, your server should have a minimum of a 2-core CPU, plus 1 core per 2 concurrent tasks (take note: this is a performance enhancement in v12 – previously, 1 core per concurrent task was required), and a minimum of 2GB RAM, plus 500MB per concurrent task. But for best performance, and to run multiple Jobs and tasks, you should have considerably more cores and RAM than the minimum. As with the Veeam Repository, look to the Veeam Best Practice Guide for actual Proxy sizing guidelines.
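
For example, a Proxy sized for 8 concurrent tasks would need, at minimum, 2 + (8 ÷ 2) = 6 cores and 2GB + (8 × 500MB) = 6GB RAM – and ideally a fair bit more than that for headroom.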

The remaining requirements depend on the Backup Transport Mode you plan to use – Direct Storage (DirectSAN or DirectNFS), Virtual Appliance (hotadd), or Network (nbd).

  • For Direct Storage mode, the Proxy should have direct access to the storage the source (production) VMs are on. The open-iscsi and multipathing packages also need to be installed (both are pre-installed by default on standard Ubuntu installs). If using DirectNFS, the NFS client package – nfs-common (Debian) or nfs-utils (RHEL) – needs to be installed (a quick install sketch is included below).
  • If you choose Virtual Appliance mode, the Proxy VM(s) must have access to the source VM disks they process. The VM Proxies must also have VMware Tools installed (for vSphere Backups) as well as a SCSI 0:X Controller.
    Note: Linux Proxies cannot be used as Guest Interaction Proxies, and VM Proxies do not support the VM Copy scenario.
  • If using Backup from Storage Snapshots, the Proxy does not need access to the Volume itself, but rather to its snapshot or clone. To use BfSS, the Proxy Transport mode should be configured for either Automatic or Direct Storage, and in the Backup Job > Storage section > Advanced button > Integration tab, the box for Backup From Storage Snapshots should be enabled.
    Enable BfSS

     

You can reference additional requirements and limitations in the User Guide here and here.
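
If any of the packages mentioned above are missing from your Proxy, installing them is quick. A rough sketch, assuming an Ubuntu/Debian-based Proxy (on RHEL-based distributions, use dnf with the iscsi-initiator-utils, device-mapper-multipath, and nfs-utils packages instead):

sudo apt install open-iscsi multipath-tools   # iSCSI initiator & multipathing (Direct Storage)
sudo apt install nfs-common                   # NFS client (DirectNFS only)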

As with Linux-based Repositories, SSH and the Bash Shell are also required. You can check whether the user account you use to configure your Linux server is using Bash by looking at its entry in the passwd file – the shell is the final field in the output:
 

Check User Shell
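
For example, assuming a hypothetical user named veeamsvc, the shell shows up as the last field of its passwd entry:

grep '^veeamsvc:' /etc/passwd
veeamsvc:x:1001:1001::/home/veeamsvc:/bin/bash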

Veeam installs two services used by Proxies – the Veeam Installer Service and the Veeam Data Mover component. Although the Veeam Data Mover component can be persistent or non-persistent, for Linux Backup Proxies the Data Mover must be Persistent.
 

Linux Proxy Data Mover Requirement
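
Once the Proxy has been added to Veeam (covered below), you can also verify the persistent Data Mover from the Linux side. A quick check, assuming the veeamtransport service name Veeam typically uses for the persistent Data Mover:

sudo systemctl status veeamtransport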


Linux Installation

As I mentioned in my Repository post, I won't go through the Linux install. I gave a few suggestions regarding gaining install experience in that post.

 

Linux Configuration

I will again be configuring my Linux server to connect to my storage using the iSCSI protocol. If you use another protocol, make sure to use the commands appropriate for that protocol, as well as for the Linux distribution you're running (Debian, Red Hat, or other).

After installing your OS, perform an update and upgrade of its software and packages:

Linux OS Update & Upgrade
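
On an Ubuntu/Debian-based server, that boils down to something like the following (use dnf upgrade on RHEL-based distributions):

sudo apt update && sudo apt upgrade -y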

To save some reading time (yay! 😂), I won't provide the details of the remaining Linux configuration steps, as they are identical to what I provided in my Repository post. I'll just provide a high-level list here. For a reminder of each step's details, please refer to the Implementing Linux Repository post linked above.

  1. Change your IQN to a more relevant name:
    IQN example: iqn.2023-12.com.domain.hostname:initiator01
  2. If you're using a SAN, and your storage vendor has specific iscsi.conf and multipath.conf configurations, make those changes, then restart the iscsid and multipathd services to apply the changes
  3. Though not required, it is recommended to change your adapter names based on function:
    Adapter Alias/Name Change

     
  4. After changing adapter names, configure an iscsiadm iface for each storage adapter used for multipathing, then restart the iscsid service (see the sketch after this list). You may need to restart your server for the changes to take effect
  5. If using Direct Storage or Backup From Storage Snapshots, log onto your production storage array and configure your production Volumes based on the Transport method you choose to use. I provide further details on each method below
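
As a refresher of what steps 1 and 4 look like in practice, here is a rough sketch – the iface and NIC names are placeholders for illustration, so substitute your own:

sudo nano /etc/iscsi/initiatorname.iscsi   # set InitiatorName=iqn.2023-12.com.domain.hostname:initiator01
sudo iscsiadm -m iface -I iface-storage1 --op new
sudo iscsiadm -m iface -I iface-storage1 --op update -n iface.net_ifacename -v ens192
sudo systemctl restart iscsid multipathd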

After everything is configured, you're now ready to finalize your Proxy. I'll finish this configuration section based on the Backup Transport method you're implementing…

Virtual Appliance

This is the easiest method to implement. Aside from making sure your Proxy VMs have access to the source disks of the production VMs you're backing up, all you need to do on your Linux system is make sure the Linux user account you use to connect to your Veeam server uses the Bash shell, and that SSH is enabled. Configuration steps 1-4 above are not needed. Also, if you use a mix of physical and virtual Proxies in your environment, as I do, I generally configure my VM Proxies with 8 Cores and 16GB RAM. If you use all virtual Proxies, then make sure to size all your Proxy VMs according to Veeam Sizing Best Practices. I provided the BP link above.
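
A quick way to check those prerequisites on an Ubuntu-based Proxy VM (open-vm-tools provides the VMware Tools functionality on most modern distributions; the SSH service is named sshd on RHEL-based systems):

sudo apt install open-vm-tools
systemctl is-active open-vm-tools ssh   # both should report "active"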

Backup From Storage Snapshot (BfSS)

If you're using BfSS, log onto your storage array and configure access to your production Volumes with the IQN name you set earlier in the initiatorname.iscsi file. As mentioned in the Requirements section, for BfSS the only access the Proxy server requires is to snapshots or clones, not the Volumes themselves. For my vendor's array, the setting I use for this is to configure "Snapshot only" access to the production Volumes.

The last thing needed is to connect the Proxy server to the storage array (target):

sudo iscsiadm -m discovery -t sendtargets -p Discovery-IP-of-Array
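
A successful discovery returns the array's target IQN(s). For example (the IP and target IQN below are made up for illustration):

sudo iscsiadm -m discovery -t sendtargets -p 192.168.50.10
192.168.50.10:3260,1 iqn.2000-05.com.vendorname:array01.target01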

Direct Storage (DirectSAN)

This Transport method, I just recently learned, is a bit "clunky" with Linux. As I was researching the User Guide, and filling in the gaps it doesn't cover by reading other posts on the Web, I came across this Forums post, which states at least one caveat for why not to use this method with Linux. Since v11a, and as noted in the v11a Release Notes – "Linux-based Backup Proxies configured with multipath does not work in DirectSAN". There are a couple of comments in the post from users who did occasionally have multipathing work. But, with it being so unstable, it's probably best to avoid this method until Veeam resolves the multipathing issue.
 

For article completeness, and for when Veeam does resolve the multipathing issue, I will still share the configuration needed for DirectSAN. On your production storage, each Volume needs ACL access with the Linux server's IQN, as is done with BfSS. But, for each LUN, "Volume only" access needs to be configured as well. No Snapshot access is required.

On your Linux server, run a target discovery command, as is done with BfSS. After that, you then need to perform a target "login" operation:

sudo iscsiadm -m node -l
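
After logging in, you can verify the sessions and confirm the production LUNs are now visible to the Proxy as block devices:

sudo iscsiadm -m session
lsblk
sudo multipath -ll   # keeping in mind the multipathing caveat above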

Veeam Server

After you've finished configuring your Linux server for the Transport method you're using, you need to add it as a managed server in Veeam, then go through the Add Proxy > VMware Backup Proxy process. Once you've performed those two steps, you can either manually assign this specific Proxy to your Jobs as needed, or allow Veeam to choose for you via the Automatic setting.

 

Conclusion

And that's all there is to implementing Linux Proxies in your Veeam Backup environment. Told you it wasn't so bad 😉 You now have a fully Linux-integrated Veeam Backup & Replication environment!…well, at least fully-integrated Linux Repositories and Proxies. 🙂

 

34 comments

Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8459 comments
  • December 20, 2023

Great article Shane.  Just something to add to this: you need to disable multipathing, as you could have issues with disks getting attached to the Proxy.  As per KB4460: Failed to get guest OS path for newly attached disk there are 2 new disks with uuid (veeam.com)


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • December 20, 2023

Great additional resource @Chris.Childerhose . Thank you for sharing bud!


Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8459 comments
  • December 20, 2023

Not a problem.  I ran into this one already, so it is an important step. 😁


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • December 20, 2023

I haven’t run into this error/issue yet, so I wonder when or how this occurs. If it’s “common”, I wonder why this info hasn’t been more widely disseminated. It’s a big deal IMO. I mean..if you can’t use multipathing, then what’s the point in using Linux Proxies...well, at least for use with BfSS/Direct Storage?


Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8459 comments
  • December 20, 2023
coolsport00 wrote:

I haven’t run into this error/issue yet, so I wonder when or how this occurs. If it’s “common”, I wonder why this info hasn’t been more widely disseminated. It’s a big deal IMO. I mean..if you can’t use multipathing, then what’s the point in using Linux Proxies...well, at least for use with BfSS/Direct Storage?

Yeah, it may be different in that regard.  We ran into it with HotAdd mode, which is why it is recommended to be turned off.


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • December 20, 2023

I wanna say I've seen this recommendation somewhere but don't recall where. For hotadd this makes sense. But the KB doesn't specify just for hotadd iirc? 


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • December 20, 2023

Just rechecked...doesn't specify a transport method. 


Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8459 comments
  • December 20, 2023

Yeah, it may not apply to all cases or modes.  I just saw this on our side when moving to Linux Proxies, so we implemented it to fix the underlying problem.


Moustafa_Hindawi

Awesome information, thank you @coolsport00 


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • December 23, 2023

Appreciate it Moustafa. 


dloseke
  • Veeam Vanguard
  • 1447 comments
  • December 27, 2023

Another great article...bookmarked this one as well.  Thanks for posting these Shane...they should be instrumental for me down the road!


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • December 27, 2023

Sure thing. Hope they help you out. 


Scott
  • Veeam Legend
  • 997 comments
  • January 3, 2024

This will help me when I implement them. Did you find any performance difference vs a Windows proxy? 


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • January 3, 2024

Glad to help, Scott.

Hmm...not really. My backups run much quicker, but that's due to the transport mode I use now → Fwd vs FFwd, having fewer synthetic operations, and using Fast Clone. Read/write speeds are about the same though. 


kyahwilde
  • New Here
  • 2 comments
  • May 13, 2024

I can’t get this for the life of me to work.  I always get Direct NFS connection not available.  I do have a ticket open and basically it appears it just hangs on NFS sessions.  I can mount manually just fine. 


Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8459 comments
  • May 13, 2024
kyahwilde wrote:

I can’t get this for the life of me to work.  I always get Direct NFS connection not available.  I do have a ticket open and basically it appears it just hangs on NFS sessions.  I can mount manually just fine. 

I would continue to work with Support on this one to get it resolved.


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • May 13, 2024

@kyahwilde - keep in mind, this post is for iSCSI connection. I haven't implemented Veeam Proxies connected to NFS. Have you had a look at the NFS area in the User Guide for requirements? 


kyahwilde
  • New Here
  • 2 comments
  • May 13, 2024

Yes. Good point though :).


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • May 13, 2024
kyahwilde wrote:

Yes. Good point though :).

Ok. Your best bet is to see what Support has to say at this point. 

Best.


maurizio.rosa

I have a strange behaviour with Linux Proxies. RH proxies cause a stun on some VMs (for example, objects with high I/O). CentOS proxies have no problem. Do you have an idea?


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • January 29, 2025

Hi ​@maurizio.rosa -

Welcome to the Community. Can you explain what you mean by “stun”? What specific behavior are you seeing during a backup? Is the Guest OS within the VM being backed up frozen? Regardless, I have personally not heard of this behavior with Veeam for any Linux version. I did find this issue with using vVols:

https://www.veeam.com/kb3055

Are you having an issue where you’re using Linux VM Proxies and Veeam backup snapshots are occasionally stuck on the VM? I have this happen in my environment as well with Ubuntu. Not really much to do about that. Veeam has a KB on this behavior:

https://www.veeam.com/kb1775

At the very least, it’s probably best to open a case with Veeam Support to troubleshoot what’s going on.

Best.


maurizio.rosa

Hi ​@coolsport00, I’ll explain the problem.

By “stun” I mean that during snapshot creation, consolidation, and snapshot deletion, the VM under backup freezes. This happens on machines with MSSQL databases or with Forti firewalls, and it causes some problems.

The infrastructure is on VBR 12.3. With CentOS proxies nothing happens. With RH9 proxies the problem happens really often.

I already opened a case with no result (according to Veeam, it is not a problem with VBR).


AndrePulia
  • Veeam Legend, Veeam Vanguard
  • 333 comments
  • January 29, 2025

@coolsport00  Very good article. As a contribution: a client once included about 100 VMs in a Job, with the VMs spread across several LUNs. This ended up causing an overload on the storage system, as several LUNs underwent storage snapshots simultaneously. I think it is worthwhile to map which LUNs the VMs reside on to avoid this type of situation.
Also, on some storage arrays a specific area can be defined for snapshots, and this area can overflow depending on the amount of I/O made against the LUNs.


coolsport00
  • Author
  • Veeam Legend
  • 4133 comments
  • January 29, 2025

Ok ​@maurizio.rosa . Yeah...I’ve not heard of this behavior personally. I find it odd it only happens with RH. A Linux kernel is a Linux kernel under the hood. You can maybe search for this behavior on the Forums as well to see if anyone has had a similar issue and what they did to resolve it.

And, thanks Andre!


Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8459 comments
  • January 29, 2025
maurizio.rosa wrote:

Hi ​@coolsport00, I’ll explain the problem.

By “stun” I mean that during snapshot creation, consolidation, and snapshot deletion, the VM under backup freezes. This happens on machines with MSSQL databases or with Forti firewalls, and it causes some problems.

The infrastructure is on VBR 12.3. With CentOS proxies nothing happens. With RH9 proxies the problem happens really often.

I already opened a case with no result (according to Veeam, it is not a problem with VBR).

What type of storage are you using, and is it connected via FC, iSCSI, or something else?  There are usually tweaks you can do depending on the storage type and connectivity.

