Question

Best practice in formatting XFS over iSCSI NAS?

  • 22 February 2023
  • 7 comments
  • 1239 views

Userlevel 7
Badge +2

Hi Folks,

I’m trying to create the XFS partition for the hardened Linux repo by following this article: https://helpcenter.veeam.com/docs/backup/vsphere/backup_repository_block_cloning.html?ver=120#configuring-a-linux-repository

 

Step #1 throws the following warning:

repouser@BKPSVR01:~$ sudo mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb -f

log stripe unit (1048576 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/sdb               isize=512    agcount=32, agsize=28672000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=917504000, imaxpct=5
         =                       sunit=256    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=448000, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Is that due to the QNAP NAS limitation or something I may have missed?

Note: I’m not a Linux guy, hence posting this thread here.


7 comments

Userlevel 7
Badge +20

Try following this and see - https://www.starwindsoftware.com/blog/veeam-hardened-linux-repository-part-1

 

Userlevel 2
Badge +1

First, a general comment. I notice that you are formatting /dev/sdb directly, i.e. the raw block device without any partition table. While this is possible, in general I would not recommend it. You should probably use fdisk or parted to create a partition on the device first, and then format that partition. For example, if you create a single partition spanning the entire device, you would format /dev/sdb1.

 

Note that there’s nothing terribly wrong with using the block device directly, but it can cause some strange behavior such as poor stripe alignment and some disk tools may not recognize that the volume has anything on it.

Now, on to your message: I’m assuming you are asking about the log stripe messages? Like any filesystem, XFS has metadata. When laying that metadata out on a block device made of multiple components (e.g. a RAID volume), Linux exposes information about the RAID geometry so that XFS can align its metadata to the parameters used when the RAID was created, and thus maximize performance.

In your case, XFS is detecting that the underlying RAID volume is using a stripe size of 1MB, but this is larger than the maximum size that XFS supports for the log stripe, so XFS is using a reasonable default value instead.

In theory, “perfect” performance would be achieved by using a smaller stripe size for the underlying RAID, assuming the RAID supports that. In practice, the impact is probably nearly immeasurable: Linux is quite smart about handling stripe read-modify-write (RMW) cycles, and smaller stripe sizes carry performance costs of their own.

Overall, I wouldn’t worry about this message.
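For reference, if you ever do want to hand XFS the array geometry yourself, mkfs.xfs accepts it as sunit/swidth in 512-byte sectors (or su/sw in bytes). A quick sketch of the conversion; the 256 KiB stripe and 4 data disks are hypothetical example values, not a recommendation:

```shell
# Convert RAID geometry to mkfs.xfs sunit/swidth (both in 512-byte sectors).
# STRIPE_KB and NDATA below are example values; use your array's real numbers.
STRIPE_KB=256   # per-disk stripe size in KiB
NDATA=4         # number of data-bearing disks in the array

SUNIT=$((STRIPE_KB * 1024 / 512))   # stripe unit: 256 KiB -> 512 sectors
SWIDTH=$((SUNIT * NDATA))           # stripe width: sunit x data disks

echo "mkfs.xfs -d sunit=$SUNIT,swidth=$SWIDTH ..."
```

But as noted, mkfs.xfs usually detects this itself, so hand-tuning is rarely worth it.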

Userlevel 2
Badge

Yup, I hit this error when experimenting with RAID stripe sizes. Just stick with 256 KiB on both the RAID controller and XFS.

Userlevel 4
Badge +1

It’s a bit off-topic, but I wouldn’t recommend using iSCSI via QNAP. Why? Because I did it some years ago (for a VBO backup), and after a failure I lost the whole volume…

Also, some years ago all data was gone after someone removed one of the two disks of a RAID-1. How can that happen on RAID-1?

 

Since then I have zero trust in those devices, hence my advice. Good luck!

Userlevel 7
Badge +17

Some great info here, especially from @tsightler. I used this site by Paolo to get my LHR configured and working; I really like the detail, and yet simplicity, of his 3-part post.

Cheers!

Userlevel 7
Badge +8

I had a problem partitioning disks larger than 2 TB.

 

Use a GPT partition table (for a VMDK and/or a LUN):

https://www.cyberciti.biz/tips/fdisk-unable-to-create-partition-greater-2tb.html

fdisk -l /dev/sda

 

Example: creating a 6 TB partition on Linux:

parted /dev/sdb

mklabel gpt
unit TB
mkpart primary 0TB 6TB
print
quit

 

I am using the same command on Ubuntu successfully:

sudo mkfs.xfs -f -b size=4096 -m reflink=1,crc=1 /dev/sda
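If you want to sanity-check the flags before running against the real LUN, mkfs.xfs also works on an ordinary file. A sketch using a scratch image (the file name is just an example):

```shell
# Dry run on a 1 GiB scratch file instead of the real device. The meta-data
# summary that mkfs.xfs prints should show reflink=1 and crc=1.
truncate -s 1G xfs-test.img
mkfs.xfs -f -b size=4096 -m reflink=1,crc=1 xfs-test.img
```

Delete the image afterwards; nothing touches your disks.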

 

Userlevel 7
Badge +17

Because fdisk creates an MBR partition table, @Link State, and MBR has a 2 TB limit. For disks larger than 2 TB you need a GPT table, which the gdisk tool can create. Actually, parted should work as well, and it appears that’s what you used, but somewhere in the process an MBR table got created.

Cheers
