First, a general comment: I notice that you are attempting to format /dev/sdb directly, i.e. the raw block device without any partition. While this is possible, in general I would not recommend it. You should probably use fdisk or parted to create a partition on the device first, and then format that partition. For example, if you create a single partition spanning the entire device, you would format /dev/sdb1.
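For example, assuming the device really is /dev/sdb and you want a single partition covering the whole disk (adjust device names and sizes for your environment), something like this would do it:

    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary 0% 100%
    mkfs.xfs /dev/sdb1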
Note that there’s nothing terribly wrong with using the block device directly, but it can cause some odd behavior, such as poor stripe alignment, and some disk tools may not recognize that the volume has anything on it.
Now, on to your message: I’m assuming you are asking about the log stripe messages? Like any filesystem, XFS has metadata, and when that metadata is laid out on a block device made up of multiple components (e.g. a RAID volume), Linux exposes information about the underlying layout so that XFS can align its metadata to the parameters used when the RAID was created, and thus maximize performance.
In your case, XFS is detecting that the underlying RAID volume uses a stripe size of 1MB, but this is larger than the maximum log stripe unit that XFS supports (256KiB), so XFS falls back to a reasonable default value instead.
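If you would rather set the log stripe unit explicitly instead of letting mkfs.xfs pick the default for you, you can pass it on the command line. A minimal example, assuming the partition is /dev/sdb1 (substitute your actual device):

    mkfs.xfs -l su=256k /dev/sdb1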
In theory, “perfect” performance would be achieved by using a smaller stripe size for the underlying RAID, assuming the controller supports that, but the actual impact of this is probably close to immeasurable: Linux is pretty smart about how it handles stripe read-modify-write (RMW) cycles, and smaller stripe sizes carry their own performance trade-offs anyway.
Overall, I wouldn’t worry about this message.
Yup, I had this error when experimenting with RAID stripe sizes. Just stick with 256KB, both on the RAID controller and with XFS.
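For example, with the controller set to a 256KB stripe across, say, 8 data disks (both numbers are just placeholders here, match them to your actual array), the corresponding mkfs.xfs call would look something like:

    mkfs.xfs -d su=256k,sw=8 -l su=256k /dev/sdb1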
It’s a bit off-topic, but I wouldn’t recommend using iSCSI via QNAP. Why? Because I did it some years ago (for a VBO backup) and the result was that, due to a failure, I lost the whole volume…
Also, some years ago there was zero data left after someone removed one of the two disks in a RAID-1. How could that happen with RAID-1?
Since then I have zero trust in those devices, hence my advice. Good luck!
Some great info here, especially from @tsightler . I used this site by Paolo to get my LHR configured and working. I really like the detail, yet simplicity, of his 3-part post.
Cheers!
I had the problem of partitioning disks larger than 2 TB
That’s because fdisk creates an MBR table by default, @Link State , and MBR has a 2TB limit. You need to use the gdisk tool to create a GPT table on the disk for sizes > 2TB. Actually, parted should work as well, which it appears you’ve done, but somewhere in the process an MBR table got created.
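If you prefer gdisk’s scriptable companion sgdisk, something like this would wipe any leftover MBR/GPT data and lay down a fresh GPT with a single partition spanning the disk (assuming the disk is /dev/sdb and contains nothing you want to keep):

    sgdisk --zap-all /dev/sdb
    sgdisk -n 1:0:0 -t 1:8300 /dev/sdb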