
Implementing Linux Veeam Proxies



38 comments

  • New Here
  • 4 comments
  • March 24, 2025

I see the article uses a direct iSCSI link from the Linux proxy to the repository. I think we have a complication that I don’t know if anyone else has encountered. When our system was set up it was fully virtualised, with VBR as a VM but with the iSCSI connection to the storage made from within the VBR server OS via an iSCSI initiator. That means the repository is seen as a mapped drive inside the VBR server. How exactly does that work for a Linux proxy using Hot Add with BfSS enabled? It seems to work, as all our backups are succeeding, but I’m not sure how it is managing the connection and snapshot, or whether it is failing over to another method.


Chris.Childerhose

It would depend on the Linux distribution, but with Ubuntu, which we use, it would use the open-iscsi client to connect - https://documentation.ubuntu.com/server/explanation/storage/iscsi-initiator-or-client/index.html

We don’t use iSCSI as we are an FC shop.


  • New Here
  • 4 comments
  • March 24, 2025

Thanks for the prompt reply, Chris. Sadly we are on Microsoft with iSCSI, and I suspect that has implications for performance. I suspect the VBR server is going to have to handle all the throughput to and from the virtual Ubuntu proxies.

I am seeing the proxies as the primary bottleneck and wondering whether I need to do anything about their specifications (8 GB RAM, 2 vCPU, single task). Is the bottleneck genuine, i.e. the proxies themselves struggling, or is it really down to the low-level I/O between VBR and the Linux proxies due to the iSCSI path?

I have already tweaked the network buffers on both VBR and the Linux proxies, which has helped, but I am wondering about adding more vCPUs to the proxies. As I don’t run more than a single task on each proxy, I would have thought 2 vCPUs would be adequate.

I have also put the entire backup network on a dedicated subnet using a custom ESXi TCP/IP stack to avoid the hidden management-network bottlenecks in ESXi. I may be gilding the lily, as the network is only 1 GbE.

We have configured a static LAG so there is a pair of 1 GbE links available between all ESXi hosts, but the way IP-hash-based routing works means only one link is active for any given host pair, so it doesn’t help bandwidth between hosts. That said, there is an exception to the rule: it works wonders for TrueNAS on the iSCSI repository, which does use both links (I haven’t figured out how, but monitoring during backups shows writes use one link and reads the other, which doubles bandwidth and is great).
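For anyone wondering why a static LAG only uses one link per host pair, here is a minimal sketch of how IP-hash uplink selection behaves. This is my own simplified model for illustration, not VMware’s actual code; the real teaming policy is described in the vSphere networking docs:

```python
def ip_hash_uplink(src_ip: str, dst_ip: str, n_uplinks: int = 2) -> int:
    """Simplified model of 'Route based on IP hash': XOR the two
    32-bit addresses and take the result modulo the uplink count."""
    def to_int(ip: str) -> int:
        a, b, c, d = (int(octet) for octet in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d
    return (to_int(src_ip) ^ to_int(dst_ip)) % n_uplinks

# One host pair always hashes to the same uplink, in both directions
# (XOR is symmetric), so a single host-to-host flow never touches the
# LAG's second link. Different IP pairs can land on different links,
# which may be why multiple iSCSI sessions spread across both.
print(ip_hash_uplink("192.168.10.11", "192.168.10.20"))
print(ip_hash_uplink("192.168.10.11", "192.168.10.21"))
```

That symmetry would explain why per-flow traffic between two hosts never exceeds one link, while a NAS presenting multiple target IPs (or multiple iSCSI sessions) can end up balanced across both.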

Any suggestions on best practice for configuring the BfSS options in this scenario would be gratefully accepted, i.e. should I enable any of the failover options (I haven’t at present and it seems to be working), or should I not use BfSS at all since I am using iSCSI? As I said, the underlying I/O path is a mystery to me, and the fact that BfSS may be working at all is hard for me to understand.


Chris.Childerhose

I myself cannot give advice on the Linux side, but there was a great post here on the community about Linux proxies by Shane - Implementing Linux Veeam Proxies | Veeam Community Resource Hub

Check that out; I am sure there is more here that you can find by searching. Or maybe search the forums as well - https://forums.veeam.com

 


  • New Here
  • 4 comments
  • March 24, 2025

Hi Chris,

Appreciate the links. I have read the first one already, and it doesn’t answer the question that is sticking in my mind at the moment. It isn’t specifically a Linux question either; it is more to do with what happens with a virtual VBR server that is serving as the iSCSI initiator for the external repository.

Assuming we have an iSCSI connection directly from inside a Windows VM acting as the VBR server to an external NAS, how does Veeam mount the Hot Add disk to the Linux proxy? Is this an NFS share to the iSCSI-mapped extent, or is some other sorcery employed? I am assuming that Veeam will not instruct the Linux proxy to mount the iSCSI share directly but will proxy the iSCSI extent to the Linux proxy somehow (possibly an NFS share on the mapped drive), and that VBR will create the storage snapshot for BfSS itself.

Am I barking up the wrong tree? 


Chris.Childerhose

The proxy would use Hot Add to read the disk data for backup, then send it to the VBR server, which is the repository server with the iSCSI repo attached. So traffic goes VBR > Proxy > back to VBR for the repo.


Tommy O'Shea
  • Experienced User
  • 116 comments
  • March 24, 2025

Based on this page, it’s not the Veeam server that needs to be connected to the external NAS; it’s the proxy that should be.

It needs to be visible but not initialized by the OS of the proxy server. I would suggest reaching out to Veeam support to confirm the best way to safely set this up.


Chris.Childerhose

See the following link for the backup process - VMware Backups | Veeam Backup & Replication Best Practice Guide


Tommy O'Shea
  • Experienced User
  • 116 comments
  • March 24, 2025

Actually, I may have misunderstood: you’re not trying to use Direct SAN to back up VMs stored on a NAS, you’ve connected the NAS as an iSCSI repo to the VBR server.

The traffic would flow from the proxy reading the source VM, to the VBR server, and on to the repository via iSCSI.


coolsport00
  • Author
  • Veeam Legend
  • 4145 comments
  • March 30, 2025

Hi @NickDaGeek -

First off...your post really should be its own “Discussion Boards” question here on the Hub. 

This (my) post you’re commenting on deals specifically with how to create a physical Linux proxy configured for BfSS using iSCSI connections, so the proxy is able to “see” the source (VM) storage and use that method. Yes, it can also be used with VM proxies, but I think there may still be some traversing of the virtual stack, which may hinder performance a bit. As I also shared in the post, I don’t recommend DirectSAN for the reason stated, but this config is also required for that method if you do use it.

I’m not entirely sure what specific question you’re asking here though. You’re jumping around a bit 😊 Are you wondering where your performance issue is coming from? Are you concerned about sizing the VBR server when also using it as a repo? I recommend just creating a new post with the goals and/or questions you have. Share as much about your backup environment as you can and we can try to help further. But keep in mind, Support is always your “go to” folks to get help.

Regarding sizing, take note of the Proxy sizing guideline link I share in this post. That does need to be considered. If you’re using a VM for proxies, you can’t have more than 8 vCPUs per proxy (be it Linux or Windows). Size your proxy for tasks and processing according to the sizing guidelines in the Guide. If you’re doubling up your VBR server to also have the repo role, you need to size it for both the VBR AND repo roles.
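As a rough back-of-the-envelope check, the commonly cited rule of thumb is about 1 core and 2 GB RAM per concurrent task. The figures below are that rule of thumb, not an official formula; verify against the current Best Practice Guide for your version:

```python
def proxy_size(concurrent_tasks: int) -> tuple[int, int]:
    """Rough proxy sizing sketch: ~1 core and ~2 GB RAM per
    concurrent task, with a small floor for the OS itself."""
    cores = max(2, concurrent_tasks)       # at least 2 cores for the OS
    ram_gb = 2 + 2 * concurrent_tasks      # 2 GB base + 2 GB per task
    return cores, ram_gb

print(proxy_size(1))  # a single-task proxy needs very little
print(proxy_size(8))  # at the 8 vCPU ceiling for a VM proxy
```

On that estimate, a single-task proxy with 2 vCPU and 8 GB RAM is already comfortably sized, which supports looking at the network path rather than the proxy specs for the bottleneck.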

Best. 


  • New Here
  • 4 comments
  • March 31, 2025

Hi @Chris.Childerhose, thanks for confirming the path. I had a chat with Veeam tech support during a webinar and they confirmed the mount path and data traffic routing for Hot Add in this scenario are exactly what you described.

Hi @Tommy O'Shea, as you can see from the above, great minds think alike, and it is as Chris identified: the VBR server is proxying the external iSCSI repo for the Linux proxies.

Hi @coolsport00, thanks for confirming that you are talking about a physical, not a virtual, proxy in your article. That makes sense to me now.

Your comment about traversing the virtual stack affecting performance is very well observed. Going fully virtual changes the network topology considerably. As we are configured here, with iSCSI to the VBR server rather than to the host, there is both internal and external networking involved: internal network between VBR and a virtual proxy on the same physical host, external network between VBR and the repository, and external network between VBR and proxies on other physical hosts when working across hosts.

I have raised the performance and configuration questions with Veeam support as a ticket. They confirm, as Chris and Tommy said, that VBR is proxying the iSCSI repository for the Linux proxies. I think we are saturating the network on VBR and its links to the external network, since during jobs VBR acts as a two-way traffic proxy between the Linux proxies and the external repo.

My gut reaction is also that we are not able to use BfSS in this scenario, so the tick box is being ignored by Veeam and it is directly mounting the VMDK on the repo via VBR. This might explain the logs showing other jobs finding the resource locked.

To one and all: thank you for your time and your suggestions and information. I have learned a lot. I now realise that the design of the network topology, the placement of proxies and repositories, and their connection methods are fundamental to performance, and a lot more complicated than my predecessors realised.

Thanks again 😊


Chris.Childerhose

Not a problem, glad to have helped somewhat.


coolsport00
  • Author
  • Veeam Legend
  • 4145 comments
  • March 31, 2025

@NickDaGeek - no problem. And I’m not sure you can even use BfSS with TrueNAS; I don’t think that’s even supported. Your only routes would be DirectSAN or Hot Add. Hot Add isn’t bad at all with good network throughput, but it looks like you don’t necessarily have that (only 1Gb?), so you’d be limited. DirectSAN wouldn’t be that great either, again because of your small network throughput.

Best.