Solved

Veeam & NetApp integration / iSCSI integration


Userlevel 4

Hi,

The backup server has 2x 10Gbps NICs and sits in the same VLAN as the NetApp AFF iSCSI device. I also have a 1Gbps network card for management purposes. The two 10Gbps cards are trunked together, so they should be doing 20Gbps now (at least I think they are :-) )

The integration between NetApp and Veeam is configured using the NetApp's management ports, which sit in a different VLAN reachable from the backup server's 1Gbps NIC.

So I can see the snapshots on the NetApp, but backups run slowly (probably over the 1Gbps NIC). How can I force Veeam to use the 10Gbps NICs instead of the 1Gbps NIC? Is there a KB article that explains how to set up this integration?

Thank you.

Best answer by coolsport00 24 December 2023, 02:35

32 comments

Userlevel 7
Badge +20

We managed to push it over the 10Gb NICs, but we are not OK with how this works. I had to shut down the 1Gb NICs to get the traffic going over 10Gb.

That is typically how you have to set it up when it comes to networking and storage.

Userlevel 4

Better documentation from Veeam and NetApp should be in place. What they have may be OK from their perspective, working with the same stuff day in and day out, but for people who do this only once or twice a month, alongside everything else that other vendors throw their way, it is time consuming.

Userlevel 4

We managed to push it over the 10Gb NICs, but we are not OK with how this works. I had to shut down the 1Gb NICs to get the traffic going over 10Gb.

Userlevel 7
Badge +17

BTW...back in the day (6-7 yrs ago?) when I was working on getting my Proxies to do DirectSAN, I had a trying time there, as there wasn't really much documentation on how to explicitly do so, so I do get your frustration. Thankfully, I found an HPE SE (I use Nimble) who wrote a white paper on the process, and I was able to get it set up.

Userlevel 7
Badge +17

Hi @imadam - thanks for the update. Well...sorry we weren't able to get you to the point you were hoping for with your setup. I'm really surprised the traffic isn't traversing the 10Gb NICs. Yeah...I think reaching out to Veeam Support will get you sorted. They should be able to finish what we weren't able to, or catch whatever it was we missed.

When you get a chance, I would be curious to hear what they say and what they did to get things working the way you want.

Cheers.

Userlevel 4

@coolsport00 hi there. There is no single best answer, to be honest with you - or all the answers are best. We were navigating muddy waters. We managed to cover almost everything except pushing traffic over the desired network cards. Veeam has really poor control over this; actually, it doesn't work.
If I get some more time in the next few days, I will open a ticket and have Veeam sort it out. I don't want to lose time over it.

Other than that, Veeam and NetApp need to improve their integration guides with more details and usage scenarios (there are a lot of NFS/CIFS/iSCSI implementations out there with multiple NICs). I will pick one of the answers.

Userlevel 7
Badge +17

Hi @imadam -

I am just following up on your post to see if any of the provided comments helped you with your NetApp storage BfSS configuration. If any of them helped, we ask that you mark one as 'Best Answer' so others with a similar question or issue may benefit from your post. If you still have questions, please don't hesitate to ask!

Thank you.

Userlevel 4

@MatzeB iSCSI is used only for SnapManager. We found another VLAN, which carries NFS. We had to dig through the network configuration as well. I now see traffic going over 10Gbps, but I am only getting between 100 and 140 MB/s throughput. The bottleneck reported by Veeam is Source.

We are now working on the CIFS shares. There are a few things still to be done on the NetApp device.

Going over the Veeam & NetApp documents, I found that they should be updated with information on how to configure things and make sure backup works on a NetApp the way you want 😊. It would save a lot of time. NetApp, along with Pure, is one of the leading platforms.

Userlevel 5
Badge +3

Okay, just some ideas.

Your NetApp has 10Gb - fine. Your server has a 1Gb interface, let's call it "Management", with the default gateway set, and an additional 10Gb interface with a static IP and no gateway.

Do the following (a PowerShell sketch of the Windows-side steps follows after this list):

  • Are the 10Gb interfaces on the server and the NetApp on the same network (let's call it the iSCSI VLAN)? If not, fix this first.
  • Is the NetApp added under Storage Systems in Veeam?
  • The iSCSI initiator service needs to be started on your Windows host.
  • Because I think you use MPIO for iSCSI on the NetApp, you have to install the MPIO feature on Windows, then restart. After that, open MPIO in the system settings, add support for iSCSI devices, and don't forget to save. Restart again.
  • You need to create an igroup on the NetApp with the IQNs of all your Veeam proxies/servers - in your case probably just this one server. Don't map any LUNs to it, just create it.
  • On the Windows server, establish an iSCSI connection to your NetApp iSCSI target using "Quick Connect".
  • Now rescan the Windows server (Veeam - Backup Infrastructure - Windows server - Rescan).
  • Rescan the storage system (right-click - Rescan).
  • Try a test backup.
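
To make the Windows-side steps above concrete, here is a minimal PowerShell sketch, assuming Windows Server with the built-in iSCSI initiator; the portal address 10.0.10.50 is a placeholder for your NetApp iSCSI LIF in the 10Gb VLAN:

```powershell
# Install the MPIO feature (restart required afterwards)
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI devices for MPIO (takes effect after another restart)
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Start the iSCSI initiator service and make it start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Show this host's IQN - you need it for the igroup on the NetApp
(Get-InitiatorPort | Where-Object ConnectionType -eq 'iSCSI').NodeAddress

# Register the NetApp iSCSI data LIF as a target portal and connect
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```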

 

Pro tip: temporarily disable failover to network on the proxy and set the storage mode to Direct SAN access. Then the job will fail instantly instead of silently using the 1Gb path.
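
If you prefer the Veeam PowerShell console, a sketch of the same pro tip could look like this (the proxy name "VBR01" is a placeholder; I'm assuming the backup server itself acts as the proxy):

```powershell
# Force Direct SAN access and disable failover to NBD, so the job
# fails immediately instead of quietly falling back to the network
$proxy = Get-VBRViProxy -Name "VBR01"
Set-VBRViProxy -Proxy $proxy -TransportMode DirectStorageAccess -EnableFailoverToNBD:$false
```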

 

What is the job showing as the current transport mode? It doesn't matter whether it's 1Gb or 10Gb...do you see nbd? Or san? Or "retrieving from ONTAP snapshot"?

 

Matze

Userlevel 4

Yep, most things are working. What I am trying to do is retrieve backup data from the NetApp's 10Gbps ports using the 10Gbps NICs on my backup server. I have circa 30TB to back up.

For some reason, traffic goes only over the 1Gbps NIC.

Userlevel 4

@MatzeB I am making some progress 😊. At the moment I am looking at https://www.veeam.com/blog/nas-backup-remote-netapp-fas-systems-vss-integration.html to see what is required on the NetApp side.

Question: the customer has CPU licenses. Is anything else required, as far as Veeam licensing goes, to back up NFS and CIFS shares from the NetApp?

Userlevel 5
Badge +3

@imadam were you able to solve the issue in the meantime? If not, maybe we can take a deeper look into your setup?

Regards

Matze

Userlevel 7
Badge +6

I've used Direct SAN with SAS/DAS and it works great. I believe I've also done it with iSCSI, and it was also great. All of my environments are too small for FC so I haven't tried that, but I expect nothing less than great. LOL

Userlevel 7
Badge +17

I have used both FC & iSCSI over the years with success as well. Configuration is needed for BOTH FC & iSCSI - you can't just set up FC and have "it just work". It seems like the real issue is not that it's not working, but just getting your traffic over the 10Gb NICs? Go through all the steps we've provided, as well as the Guides, and see where the mishap is. I think traffic will then traverse those HBAs like you're wanting.

Userlevel 7
Badge +20

If you can use FC or iSCSI, those are the best routes to go for storage. Having the storage integration configured in Veeam helps as well. I have used both methods and they are great.

Userlevel 4

Direct access, at least over FC, works great. We have months and years of successful backups. I didn't have to bother much with the setup; it was easy. The setup here is different:

 

  • NFS is the primary protocol for VMs, over a 10Gbps NIC
  • iSCSI is used for SnapManager, over the same 10Gbps NIC
  • CIFS is served over a 1Gbps NIC

This setup has worked for pretty much 18 to 20 years without issue. I just need to see how best to point Veeam at it.

I will probably recommend getting a FAS2820 and SnapMirror, so Veeam can be used to manage the snapshots.

Userlevel 7
Badge +17

If you have some NetApps to play with...yes, I highly recommend doing so.

Hmm...again, I'm not sure a registry setting is needed here. But it's hard to say definitively without being able to see everything you have configured.

For Virtual Appliance mode, you bypass your storage and use proxy VMs for backup processing. Just have a proxy VM in each vSphere cluster and you should be good...if you go that route. But it sounds like you're mostly interested in utilizing direct access to your array. Keep us posted on how your testing goes, and on whether there's any further info we can provide.

Userlevel 4

I've got a few NetApps around, so I will play with the setup. With FC it is really easy and works pretty well. So now I will read through the BP guide and make some plans for how to push this. I have seen some registry settings mentioned for forcing traffic over the 10Gbps NICs.

Userlevel 7
Badge +17

As far as "best practice" goes: which transport mode you choose depends on your environment - what hardware you have, which software editions (Veeam, NetApp, etc.) you have, since those dictate capabilities, org expertise, and so on. Depending on the transport mode you go with, Veeam does have recommended best practices on how to implement it, generally speaking, in their BP Guide. But the details of how to do so depend on the vendor (i.e. NetApp for DirectSAN or NFS, at least on the storage side of things).

For me, using Nimble, when I implemented DirectSAN via Windows several years ago, there was a guide I followed that was created, I believe, by a Nimble SE. And though it's several years old now, pretty much everything in it still applies.

As I mentioned above, you could try Virtual Appliance (hotadd) transport mode. It works really well. Sometimes, for my replication jobs, I can get up to 3GB/s read speed. Most times it's at about 300-700MB/s, but still...that's comparable to DirectSAN/BfSS speeds.

Userlevel 7
Badge +17

I found this article, though several years old, discussing NFS configurations on NetApp.

Userlevel 7
Badge +17

You can still do Direct Storage - just use DirectNFS. Direct Storage has two modes → DirectSAN and DirectNFS. I've not configured DirectNFS personally.

You could use hotadd (Virtual Appliance) by creating VM proxies in vSphere. That backup method is pretty solid & has decent backup speeds.
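
As a sketch, setting up a hotadd proxy from the Veeam PowerShell console could look like this (the VM name "proxy-vm01" is a placeholder, and I'm assuming the VM is already added as a managed server in Veeam):

```powershell
# Register an existing managed VM as a VMware backup proxy
$server = Get-VBRServer -Name "proxy-vm01"
$proxy = Add-VBRViProxy -Server $server -Description "Hotadd proxy"

# Pin the proxy to Virtual Appliance (hotadd) transport mode
Set-VBRViProxy -Proxy $proxy -TransportMode HotAdd
```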

Userlevel 4

I have gained access to the NetApp device and collected the following info:
 

  • NFS for VMs over a 10Gbps NIC (all VMs are presented to VMware over NFS)
  • CIFS shares from the NetApp over a 1Gbps NIC
  • SnapManager/SnapDrive over iSCSI on a 10Gbps NIC

So I guess Direct Storage Access is out of the question, since those VMs are exported over NFS. What would be the Veeam best practice for backing these up?

The aim is to back all of this up over the 10Gbps NICs (including CIFS).

Userlevel 7
Badge +17

Ok...thank you for the additional info. Your VBR server is configured for iSCSI, correct? MPIO is only part of it. After your Windows server is configured, you need to grant the IQN of your server access to the volumes on your array. On Nimble, you not only add the IQN as an ACL entry, but also set the type of access - snapshot only, volume only, or snapshot & volume. For BfSS you only need snapshot access. For DirectSAN you need volume access (BTW, BfSS & DirectSAN are different). Did you add IQN ACL access to your NetApp volumes?
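
On the NetApp side, the rough equivalent of Nimble's ACLs is the igroup MatzeB described above. A sketch using the NetApp ONTAP PowerShell Toolkit, assuming it is installed and that the cluster address, SVM, igroup name, and IQN are all placeholders for your own values:

```powershell
# Requires the NetApp ONTAP PowerShell Toolkit (DataONTAP module)
Import-Module DataONTAP

# Connect to the cluster management LIF
Connect-NcController -Name cluster1.example.local -Credential (Get-Credential)

# Create an igroup for the Veeam server and add its IQN to it
New-NcIgroup -Name veeam_proxies -Protocol iscsi -Type windows -VserverContext svm1
Add-NcIgroupInitiator -Name veeam_proxies -Initiator "iqn.1991-05.com.microsoft:vbr01" -VserverContext svm1

# Verify the igroup and its initiators
Get-NcIgroup -Name veeam_proxies -VserverContext svm1
```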

Userlevel 4

 I still am not fully understanding what you’re trying to achieve.

[imadam] I guess it is hard, since I haven't laid out the whole picture. I am still painting the whole picture for myself 😊.

 I understand you have a NetApp configured for iSCSI.

[imadam] I have iSCSI over 10Gbps, and the NetApp serves iSCSI over 10Gbps. I am also trying to figure out the other stuff on this NetApp. There are CIFS shares on the NetApp, as well as NFS exports for the VMware VMs. Once I acquire full access to the NetApp, I will see how this is configured.

And you have snapshots on your NetApp.

[imadam] I see all the snapshots. A test backing up one VM over NBD works OK. Once I switch to direct access, it fails.

 

But, what are you backing up?

[imadam] I am trying to back up VMs as well as CIFS shares.

VMs, I presume? And, you want to use Backup From Storage Snapshots?

[imadam] Yes, I have done this before, but that NetApp was on FC and it worked right out of the box. However, this is a bit more complicated with iSCSI, as far as I can see 😊.

 

It is generally enabled by default, but in your Job > Storage section > Advanced button > Integration tab, I assume BfSS is enabled?

[imadam] This is enabled.

And, you see BfSS mode being used for each VM in your Backup Job?...not nbd mode?

[imadam] As I wrote above, with NBD this test VM works. If I switch to direct access (BfSS, I guess), the backup fails.

Maybe everything is configured ok, but you’re just wanting to force 10GB NICs on your Veeam server to be used/seen by your SAN?

[imadam] Maybe; that is what I am trying to figure out. Simple math tells me that Veeam is not using the 10Gbps NICs.

 

When I was using physical Windows servers as Proxies and wanted to use my HBAs (10Gb), I assigned IPs of my storage network on those and used a different (mgmt) subnet for my 1Gb NIC.

[imadam] This is the same setup. My 10Gbps NICs act as HBAs in this case, while the 1Gbps NICs are only for mgmt purposes.

 

 So, my storage network used those 10Gb HBAs based off the subnet I used with my SAN (I use Nimble, and iSCSI). All you should need to do is use the subnet on your HBAs your NetApp SAN is on.

[imadam] We have separate subnets.

If NetApp is like my Nimbles...

[imadam] I guess they are since they were in court over some intellectual property dispute 😊.

 

I connect to my array UI via a mgmt subnet to configure it, but my array’s storage uses a different subnet for storage tasks → replication, etc. I have 2 separate switches for each subnet.

[imadam] Here it is one "big" switch with modules, but separated by VLANs.

Have you looked at Veeam’s Storage Integration Guide?

[imadam] Yes, I have. I am also looking at another document provided by Veeam for the NetApp integration. That document should be expanded to cover cases where you do all kinds of stuff on the NetApp 😊.

Specifically, look at “Requirements for Proxies”; “General Requirements and Limitations”; “Adding NetApp ONTAP”; and “Configuring Backup From Storage Snapshots” ….and those just for starters.

[imadam] I will double-check this. I am trying to figure out whether they are using iSCSI at all. All the VMs appear to be on NFS, judging by the naming conventions I can see in Veeam.

Userlevel 7
Badge +17

Hi @imadam - I still am not fully understanding what you're trying to achieve. I understand you have a NetApp configured for iSCSI. And you have snapshots on your NetApp. But what are you backing up? VMs, I presume? And you want to use Backup from Storage Snapshots? It is generally enabled by default, but in your Job > Storage section > Advanced button > Integration tab, I assume BfSS is enabled? And you see BfSS mode being used for each VM in your Backup Job?...not nbd mode?

Maybe everything is configured OK, but you're just wanting to force the 10Gb NICs on your Veeam server to be used/seen by your SAN? When I was using physical Windows servers as proxies and wanted to use my 10Gb HBAs, I assigned them IPs on my storage network and used a different (mgmt) subnet for my 1Gb NIC. So my storage traffic used those 10Gb HBAs based on the subnet I shared with my SAN (I use Nimble, and iSCSI). All you should need to do is put your HBAs on the subnet your NetApp SAN is on. If NetApp is like my Nimbles...I connect to my array UI via a mgmt subnet to configure it, but the array uses a different subnet for storage tasks - replication, etc. I have 2 separate switches, one for each subnet.
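
As a sketch of that idea on the Windows side: give the 10Gb NIC an address in the iSCSI VLAN and explicitly bind the iSCSI session to that initiator portal, so the session cannot ride the 1Gb management NIC. The interface alias and addresses below are placeholders for your own values:

```powershell
# Give the 10Gb NIC a static IP in the iSCSI VLAN (no default gateway)
New-NetIPAddress -InterfaceAlias "10G-iSCSI" -IPAddress 10.0.10.21 -PrefixLength 24

# Bind both the target portal and the session to the 10Gb initiator
# address, so iSCSI traffic cannot use the 1Gb management NIC
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.50 -InitiatorPortalAddress 10.0.10.21
Get-IscsiTarget | Connect-IscsiTarget -TargetPortalAddress 10.0.10.50 -InitiatorPortalAddress 10.0.10.21 -IsPersistent $true
```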

Have you looked at Veeam's Storage Integration Guide? Specifically, look at "Requirements for Proxies", "General Requirements and Limitations", "Adding NetApp ONTAP", and "Configuring Backup From Storage Snapshots"...and those are just for starters.

Let me know if I’m not correct in my assumptions of what you’re trying to achieve.
