
Happy Thursday Community! Thought I'd share another Linux process today – how to resize a Linux volume used as a Veeam Repository, specifically increasing its size. This process assumes you're using a storage array as your Volume-backed storage. Logical Volume Manager (LVM) processes won't be discussed. And, though there is a process to decrease Volume sizes as well, I won't be discussing that here either. When you want to decrease a Volume's size, I highly recommend creating a new Volume of the desired size, then copying your data to it, to prevent potential data corruption.

As most of you probably know, increasing Volume sizes within Windows is pretty straightforward → increase the LUN storage on your array, rescan your disks in Windows Disk Manager, then Extend the Volume. It takes a minimal amount of time to perform. Within Linux, there are a few more tasks to perform than in Windows to get your Volume size increased. I'll share exactly what those steps are below.

  • The first thing you need to do is go onto your storage array and increase your LUN size. How to perform this step depends on your array, but for most arrays the process is pretty straightforward.
  • Second, log onto your Linux Veeam Repository server and unmount the Repository Volume you want to increase:
    sudo umount /path/to/vol/mountpoint
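
    As a quick aside (not one of the original steps), if you're unsure which device backs a mountpoint, the standard findmnt utility will show you before you unmount:

    findmnt /path/to/vol/mountpoint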

     

  • When using Linux OS for a Veeam Repository, you should be using multipathing to connect to your array. I discuss this process a bit in my Taking the Fear out of Implementing Linux Veeam Repositories post. Multipath devices appear as single logical devices in Linux, by default typically named like mpathX. Personally, I change the logical device names in the multipath.conf file to make the Volume names more descriptive of what I use them for (an example alias entry is shown below). Each mapped device should have at least 4 paths to the array LUNs you're using as Veeam Repository Volumes, each path appearing as a block device like /dev/sdX. You need to note each path device for the next step in this process. To list them, run the below command:

    sudo multipath -ll
    [Screenshot: Multipath Devices]
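
    For reference, a friendly-name (alias) entry in /etc/multipath.conf looks something like the sketch below. The WWID and alias here are made-up examples; use the WWID of your own device from the multipath -ll output:

    multipaths {
        multipath {
            wwid   360002ac0000000000000001f0001894a
            alias  veeam-repo-vol1
        }
    }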

     

  • You'll then need to rescan each SCSI device noted in the above step so Linux detects the increased size of the underlying LUN. This is done by 'writing a 1' to the rescan file for each device, as shown below:

    [Screenshot: Device Rescan]
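
    As shared later in this thread, the rescan write looks like this; run it once for each /dev/sdX path device reported by multipath -ll (replace sdX with your actual device names):

    sudo bash -c 'echo 1 > /sys/block/sdX/device/rescan'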

     

  • Next, you need to resize the multipath device. The command below instructs the multipath daemon to update the size of the multipath device based on the rescanned SCSI device paths. Running the command should return ok. If something in the command is incorrect, fail will be returned (NOTE: there is no space between the "k" and the single quote in the command below):

    [Screenshot: Multipath Resize]
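
    Per the commands shared later in this thread, it takes this form (substitute your multipath device name for the placeholder):

    sudo multipathd -k'resize map <volume-name>'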

     

  • You now need to re-mount the Volume:

    sudo mount /dev/mapper/<vol-name-part1> /path/to/vol/mountpoint

     

  • The last step in the process is to make the filesystem aware of the size increase. If you're using XFS, as Veeam recommends, you'll use the xfs_growfs command as noted below. For ext-based filesystems (ext3/ext4), use the resize2fs command instead.

    [Screenshot: Filesystem Resize]
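
    Going by the commands shared later in this thread, the grow command looks like this; the -d flag grows the data section to the maximum size available:

    sudo xfs_growfs -d /dev/mapper/<vol-name-part1>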

    To make sure everything looks ok, re-check your disk free space:

    df -hT


    You should then rescan your Linux server in the Veeam Console so Veeam is aware of the additional space.

    Though not required, after you perform the above steps, if you want to check that your Volume (filesystem) has no errors, you can run the following command on an ext-based Volume:

    sudo e2fsck -f /dev/mapper/vol-name-part1
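
    Note that e2fsck only works on ext2/3/4 filesystems. For an XFS Volume, the equivalent read-only check (not part of the original post) is xfs_repair in no-modify mode, run against the unmounted device:

    sudo xfs_repair -n /dev/mapper/vol-name-part1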

     

Hopefully the above process helps you if or when the time comes that you need to increase the size of your Veeam Repository on Linux OS. If you have any questions about the above, please comment below.

Really great post Shane. I love learning Linux stuff now that I'm getting more into it, so this will make resizing easier. 😁


Thanks Chris. Hope you find it useful when needed.


Thanks @coolsport00 

I’ve been playing around with linux partitions recently so this will come in handy


Thanks @dips . Hope this helps you bud.


Great post, @coolsport00 ! 👏🏻


Thank you @leduardoserrano 


Very well crafted @coolsport00


Appreciate it Christian 


Amazing @coolsport00 thank you so much


Thanks Moustafa


This and the other post @coolsport00 are fantastic posts. Great work!


Appreciate it Rick!


I’m running into an issue - the filesystem does NOT “grow”.

I followed all the steps (multiple times)… while the disk "sees" the additional space (as shown in the sudo multipath -ll output), the sudo multipathd -k'resize map xxxxx' command returns ok. But when I do the xfs_growfs, it returns to a command prompt, but the "data blocks changed" line (shown in the screenshot above) never appears and the filesystem does NOT grow.

Was there a step missed?


Hi @william.berhorst -

Interestingly enough, though I've resized a few Volumes using the process I posted, I recently had a SAN Volume where I experienced the same issue. When running the lsblk cmd, I could see the increased space for my Volume, and running the next cmds worked, as you stated you were able to do, but the filesystem wouldn't grow.

What I had to do was delete the partition using the parted cmd. When researching online how to resolve this issue, I read time and again that you don't lose data when deleting a partition, but you SHOULD have a Volume/data recovery plan just in case! Then re-add the partition using parted, then attempt to increase the filesystem using the xfs_growfs cmd. I didn't find out in my research why this issue happens...still perplexes me. But, I was able to resolve it.

First, before modifying the Volume, unmount it if it isn’t already. Then run:

sudo parted /dev/mapper/<volume-name>

print (view your Volume partition)

rm 1 (assuming your Volume is using just a single partition and it’s on “1”)

print (verify partition is removed)

mkpart primary 2048s 100% (re-create the partition spanning the whole device; 2048s keeps the standard aligned starting sector)

print (verify the partition was recreated)

sudo mount /dev/mapper/<volume-name-part1> /mount-directory (remount your fs)

sudo xfs_growfs -d /dev/mapper/<volume-name-part1> (hopefully your fs expands now)

df -hT (verify Volume has the added space)

Again, I advise making a backup of your data before performing the above steps. They worked for me, and I didn't lose any of my data. Let me know how it goes.


I have included a picture of my lsblk command.

You should see two devices with 75TB of space, but only 50TB assigned to the partition.  These are the ones I’m wanting to “grow”.

Again, using the “sudo xfs_growfs /dev/mapper/mpathc-part1” command doesn’t “grow” the filesystem.

NOTE - I see in your most recent comment that you have a “-d” in your command.  So, should the command be “sudo xfs_growfs -d /dev/mapper/mpathc-part1” ????

 


Did you attempt the process I shared?

The -d parameter after the xfs_growfs cmd specifies the data section should be grown to all the space available to the filesystem (see here).


I did not try the process you shared… too scared of deleting/removing a partition (even if no data loss occurs).

The parted command kinda scares me off this whole process…  when I first read this post, figured I could “just resize/extend” the space…  this is getting way above my Linux/Ubuntu experience...


Understood. Not sure what else to suggest. “Normally”...the cmds I shared in my post should work.


@william.berhorst -

I found a different way to resize your Volume without the risk of removing/re-adding your device partition. You still go into parted, but instead do a resizepart operation. The underlying Volume (device) partition needs to be resized, along with the path devices and the multipath map, before the additional space shows in the filesystem & OS. Not sure why I was able to do the operation initially without the need to do this. Anyway, the process is as follows:
sudo multipath -l  ← list all devices the multipath volume uses

sudo bash -c 'echo 1 > /sys/block/sdX/device/rescan' ← do this for each sdX device

sudo multipathd -k'resize map <volume-repo-name>' ← do on volume, not partition

lsblk | grep disk ← run this just to verify the additional space has been added & is seen

sudo parted /dev/mapper/<volume-repo> ← go into the parted utility

print ← when prompted, choose to “Fix” (see screenshot below)

resizepart 1  ← when prompted for the size to increase to, look at the size parted "sees" as the new total space and type in that same amount; see ex. below:

[Screenshot: Parted Resize Partition]


print ← do again to verify space on partition is increased

q ← exit parted utility

sudo mount -a ← remount filesystem

sudo xfs_growfs -d /dev/mapper/<vol-name-part1> ← resize filesystem

df -hT ← verify filesystem size has been increased

 

Hope this helps!


Good information. Many of us are heavy into Windows only. I’m going to try Linux next so I’ll be keeping this handy!


Thanks Scott...hope it’s useful for you.


@coolsport00 can I shrink a Linux repository and use that space to resize another repo? I have a Linux server which has 3 repos of the same size, 144TB. Out of the 3, 2 repos are full, with only 2TB of free space left, while the 3rd repo has 40TB free. I want to shrink the 3rd repo to take 10TB out and expand the 2 full repos by 5TB each. Is this possible? If yes, how can I do it?

My understanding is: from the Linux UI, stop the repo which has 40TB free, resize it to 30TB, and then do the same steps on the other 2 repos, resizing and expanding them by 5TB each.

Thanks.


@santhoshK -

You can shrink a Linux volume (Repo), but it's not recommended as it could cause corruption of your data. I haven't done a shrink operation myself though. If you wanted to attempt it, you'd have to search around on how to do so.

What I do when I've had to reduce the size of mine is create a whole new Volume on my back-end storage, present it to Linux, move the data over, then remove the old Volume/Repo from Linux. It's just safer that way.


@coolsport00 thanks for the info. I know shrinking a Volume is not possible in the SAN world; as this is DAS, I just had the thought. Don't want to take a risk 🙂. It's true shrinking a volume can lead to data corruption.

Thanks for the advice. Have a good day.

