Resize Multipath Volumes Used as a Veeam Repository in Linux OS


Userlevel 7
Badge +17

Happy Thursday Community! Thought I'd share another Linux process today – how to resize a Linux volume used as a Veeam Repository in Linux OS, specifically increasing its size. This assumes you're using an array LUN as your Volume-backed storage. Logical Volume Manager (LVM) processes won't be discussed. Additionally, though there is a process to decrease Volume sizes as well, I won't be discussing that process either. When wanting to decrease Volume size, I highly recommend creating a new Volume of the desired size, then copying your data to it to prevent potential data corruption.

As most of you probably know, increasing Volume sizes within Windows is pretty straightforward → increase LUN storage on your array, rescan your disks in Windows Disk Manager, then Extend the Volume. It takes a minimal amount of time to perform. Within Linux, there are a few more tasks to perform than in Windows to get your Volume size increased. I'll share exactly what those steps are below.

  • The first thing you need to do is go onto your storage array and increase your LUN size. How to perform this step depends on your array, but for most arrays the process is pretty straightforward.
  • Second, log onto your Linux Veeam Repository server and unmount the Repository Volume you want to increase:
    sudo umount /path/to/vol/mountpoint

     

  • When using Linux OS for a Veeam Repository, you should be using multipathing to connect to your array. I discuss this process a bit in my Taking the Fear out of Implementing Linux Veeam Repositories post. Multipath devices appear as single logical devices in Linux, typically named mpathX by default. Personally, I change the logical path names in the multipath.conf file to make the Volume names more descriptive of what I use them for. Each mapped device should have at least 4 paths to the array LUNs you're using as Veeam Repository Volumes, denoted by the file path /dev/sdX. You need to retrieve each device used for each path for the next step in this process. To get each device used, run the below command:

    sudo multipath -ll
    [Screenshot: Multipath Devices]
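
    If you just want the sdX path devices for one map, a grep one-liner can pull them out of the multipath -ll output. This is a sketch only – mpathX is a placeholder for your map (or friendly) name:

    ```shell
    # List only the underlying SCSI path devices (sdX) for one multipath map.
    # 'mpathX' is a placeholder - substitute your own map name.
    sudo multipath -ll mpathX | grep -oE 'sd[a-z]+' | sort -u
    ```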

     

  • You'll then need to rescan each SCSI device noted in the above step so the kernel detects the increased Volume size of the underlying LUN. This is done by 'writing a 1' to the rescan file for each device.

    [Screenshot: Device Rescan]
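
    This step can be sketched as a small loop. The device names below are placeholders – substitute the sdX devices shown by multipath -ll for your Volume:

    ```shell
    # Rescan each SCSI path device so the kernel picks up the new LUN size.
    # sdb..sde are placeholders - use the sdX devices from 'multipath -ll'.
    for dev in sdb sdc sdd sde; do
      echo 1 | sudo tee "/sys/block/${dev}/device/rescan" > /dev/null
    done
    ```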

     

  • Next, you need to resize the multipath device. The command below instructs the multipath daemon to update the size of the multipath device based on the resized SCSI device paths. Running the command should return ok. If something in the command is incorrect, fail will be returned. (NOTE: there is no space between the "k" and the single quote in the command below)

    [Screenshot: Multipath Resize]
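
    A sketch of the resize command, with mpathX as a placeholder for your map name – note again there is no space between -k and the opening quote, so the quoted string becomes part of the -k argument:

    ```shell
    # Tell the multipath daemon to re-read the (now larger) path device sizes.
    # 'mpathX' is a placeholder; the command prints 'ok' on success, 'fail' otherwise.
    sudo multipathd -k'resize map mpathX'
    ```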

     

  • You now need to re-mount the Volume

    sudo mount /dev/mapper/<vol-name-part1> /path/to/vol/mountpoint

     

  • The last step in the process is to make the filesystem aware of the size increase. If you're using XFS, as Veeam recommends, you'll use the command noted below. For ext filesystems, use the resize2fs command instead.

    [Screenshot: Filesystem Resize]
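
    For XFS, this step can be sketched as below (paths are placeholders; XFS grows online, so you run it against the mounted filesystem):

    ```shell
    # Grow the XFS filesystem to fill the enlarged device (run against the mount point).
    sudo xfs_growfs /path/to/vol/mountpoint

    # For ext2/3/4 volumes, run resize2fs against the device instead:
    # sudo resize2fs /dev/mapper/<vol-name-part1>
    ```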

    To make sure everything looks ok, re-check your disk free space:

    df -hT


    You then should rescan your Linux server in the Veeam Console so Veeam is aware of the additional space.

    Though not required, after you perform the above steps, if you want to check your Volume (filesystem) has no errors, you can run a filesystem check. Note e2fsck only works on ext filesystems; for XFS, use sudo xfs_repair instead (with the Volume unmounted):

    sudo e2fsck -f /dev/mapper/vol-name-part1

     

Hopefully the above process helps you if or when the time comes to increase the size of your Veeam Repository on Linux OS. If you have any questions about the above, please comment below.


18 comments

Userlevel 7
Badge +20

Really great post Shane.  I love learning Linux stuff now that I'm getting more into it, so this will make resizing easier.  😁

Userlevel 7
Badge +17

Thanks Chris. Hope you find it useful when needed.

Userlevel 7
Badge +7

Thanks @coolsport00 

I’ve been playing around with linux partitions recently so this will come in handy

Userlevel 7
Badge +17

Thanks @dips . Hope this helps you bud.

Userlevel 7
Badge +6

Great post, @coolsport00 ! 👏🏻

Userlevel 7
Badge +17

Thank you @leduardoserrano 

Userlevel 7
Badge +9

Very well crafted @coolsport00

Userlevel 7
Badge +17

Appreciate it Christian 

Userlevel 7
Badge +6

Amazing @coolsport00 thank you so much

Userlevel 7
Badge +17

Thanks Moustafa

Userlevel 7
Badge +10

This and the other post @coolsport00 are fantastic posts. Great work!

Userlevel 7
Badge +17

Appreciate it Rick!

I’m running into an issue - the filesystem does NOT “grow”.

I followed all the steps (multiple times)…  The disk “sees” the additional space (as shown in sudo multipath -ll), and the sudo multipathd -k’resize map xxxxx’ command returns ok.  But when I run xfs_growfs, it returns to a command prompt; the “data blocks changed” line (shown in the screenshot above) never appears and the filesystem does NOT grow.

Was there a step missed?

Userlevel 7
Badge +17

Hi @william.berhorst -

Interestingly enough, though I’ve resized a few Volumes using the process I posted, I recently had a SAN Volume where I experienced the same issue. When running the lsblk cmd, I see the increased space for my Volume, and the subsequent cmds worked, as you were also able to do, but the filesystem wouldn’t grow.

What I had to do was delete the partition using the parted cmd. When researching online how to resolve this issue, I read time and again that you don’t lose data when deleting a partition, but you SHOULD have a Volume/data recovery plan just in case! Then re-add the partition using parted, and attempt to increase the filesystem using the xfs_growfs cmd. I didn’t find out in my research why this issue happens...it still perplexes me. But I was able to resolve it.

First, before modifying the Volume, unmount it if it isn’t already. Then run:

sudo parted /dev/mapper/<volume-name>

print (view your Volume partition)

rm 1 (assuming your Volume is using just a single partition and it’s on “1”)

print (verify partition is removed)

mkpart primary 2048s 100%

print (verify the partition was recreated)

sudo mount /dev/mapper/<volume-name-part1> /mount-directory (remount your fs)

sudo xfs_growfs -d /dev/mapper/<volume-name-part1> (hopefully your fs expands now)

df -hT (verify Volume has the added space)

Again, I advise to make a backup of your data before performing the above steps. They worked for me, and I didn’t lose any of my data. Let me know how it goes.
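
The steps above can also be sketched end-to-end in parted’s non-interactive script mode. This is a sketch only, with the same placeholder names as above, and it assumes a single XFS partition starting at sector 2048 – back up your data first:

```shell
# Non-interactive sketch of the delete/recreate sequence (placeholder names).
# Make sure you have a backup before removing the partition.
sudo umount /mount-directory
sudo parted -s /dev/mapper/<volume-name> rm 1
sudo parted -s /dev/mapper/<volume-name> mkpart primary 2048s 100%
sudo mount /dev/mapper/<volume-name-part1> /mount-directory
sudo xfs_growfs -d /dev/mapper/<volume-name-part1>
df -hT
```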

I have included a picture of my lsblk command.

You should see two devices with 75TB of space, but only 50TB assigned to the partition.  These are the ones I’m wanting to “grow”.

Again, using the “sudo xfs_growfs /dev/mapper/mpathc-part1” command doesn’t “grow” the filesystem.

NOTE - I see in your most recent comment that you have a “-d” in your command.  So, should the command be “sudo xfs_growfs -d /dev/mapper/mpathc-part1” ????

 

Userlevel 7
Badge +17

Did you attempt the process I shared?

The -d parameter after the xfs_growfs cmd specifies the data section should be grown to all the space available to the filesystem.

I did not try the process you shared…  too scared of deleting/removing a partition (even if no data loss occurs).

The parted command kinda scares me off this whole process…  when I first read this post, figured I could “just resize/extend” the space…  this is getting way above my Linux/Ubuntu experience...

Userlevel 7
Badge +17

Understood. Not sure what else to suggest. “Normally”...the cmds I shared in my post should work.
