Hey all — thought I’d come back and share that I finally got this to work. I had to pick my “storage implementation on Linux” steps apart piece by piece to see which step(s) the issue lay in. I was able to get everything working perfectly when using only one adapter. So as I shared, the issue obviously had to do with multipathing, or at least with how multipathing works in Linux. Still learning this!
Nimble has a “toolkit” installer for Linux which includes their “network connection manager” tool to assist with multipathing. In their documentation for partitioning and putting filesystems on connected devices, they say to use the device /dev/nimblestorage/&lt;device-name&gt;, which is essentially a symbolic link similar to /dev/mapper/&lt;device-name&gt;, and which ultimately points to a dm-# device. fdisk displays an annoying warning message after writing a partition to the device — something that doesn’t happen with a local disk or with a SAN-based LUN connected over a single adapter — but I believe the partition still gets created fine. Why the fdisk warning occurs, I still don’t know. For my own sanity I switched this task to parted, which also lets me put a GPT table on the disk, so it works out better 😊
To place a filesystem on the device, I can’t use the Nimble-based device name or I get a “Device or resource busy” error. And when I tried not using the Nimble name in my earlier testing but was still getting the error, it turned out I was using the wrong /dev/mapper/&lt;device-name&gt; path — which is why I got the “Device or resource busy” error there too. The name I was using was “higher” up the device-name tree, analogous to using /dev/sdb instead of /dev/sdb1. After partitioning the device, the Linux device mapper adds a -part1 suffix to the device name. I wasn’t using that (my tab-completion didn’t add it, so it was basically a typing error on my part). When I did use the /dev/mapper/&lt;device-name&gt;-part1 device to place the filesystem, it worked fine. I was then able to proceed as normal: mounting the device, adding the server to Veeam, and putting backups on the storage (repo).
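The mkfs step can be sketched the same way. Again this demo formats a file-backed image so it needs no SAN or root access; the real target would be the per-partition mapping device-mapper creates after partitioning, with the -part1 suffix. The mount point and filesystem choice below are just illustrative.

```shell
#!/bin/sh
set -e
# Demo target: a file-backed image. On the real system, TARGET would be
# /dev/mapper/<device-name>-part1 -- note the -part1 suffix. Pointing
# mkfs at the parent /dev/mapper/<device-name> mapping instead is what
# produces "Device or resource busy".
TARGET=$(mktemp)
truncate -s 64M "$TARGET"

mkfs.ext4 -q -F "$TARGET"   # ext4 here only so the demo needs no extra packages

# On the real device you would then mount it (needs root), e.g.:
#   mount /dev/mapper/<device-name>-part1 /mnt/veeam-repo
dumpe2fs -h "$TARGET" 2>/dev/null | grep 'Filesystem state'
```

Running `multipath -ll` (or just tab-completing under /dev/mapper/) after partitioning is a quick way to confirm the -part1 mapping actually exists before you format it.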
Thanks again for the input. You all are fantastic!