Really, no one has any experience with XFS as a Veeam repository to share?
Not yet, but I have a few customers who are VERY interested, and I will be testing with them soon. I will update here.
No experience at large scale, just some tests on an old storage array; it works pretty well.
I’m excited to build a large (multi-PB) XFS-based SOF, 2021 will be fun!
Thanks for sharing the interesting links @vNote42 . From my POV, XFS/reflink is older and more stable than ReFS was when it appeared.
I had tried to set up XFS as a Veeam repository and failed, with no real help from support either, so I had to settle for an NFS-based repository. But this was with v10.
Thanks for the information @gulzarshaikhveeam. What did not work?
It failed to get added as a repository. Support tried various steps, but it did not work at the time. Case # 04450930
It is true that for a while the ReFS driver was buggy :(
I have installed a SOBR with three XFS extents in our largest VCC datacenter. It works extremely well and we have not had issues. We did mix it with ReFS, but that was a bad idea, so we created a new SOBR just for XFS alone, with no mixing.
We also try to keep SOBRs at 2-3 extents maximum and then create new ones as needed.
Thanks Chris!
What were the problems when mixing XFS with ReFS in a SOBR?
I have successfully tested XFS repos in a POC.
One of my customers wants to replace their Windows repos with Linux.
The immutability flag is the most convincing argument.
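For context, the immutability flag on Linux is the filesystem-level immutable attribute, which the hardened repository sets on backup files. A minimal sketch, assuming a made-up path; `chattr` requires root (CAP_LINUX_IMMUTABLE):

```shell
# Sketch of the immutable attribute behind repository immutability.
# /mnt/veeamrepo/job1.vbk is a hypothetical example path; run as root.
touch /mnt/veeamrepo/job1.vbk
chattr +i /mnt/veeamrepo/job1.vbk   # set immutable: no write/rename/delete
lsattr /mnt/veeamrepo/job1.vbk      # 'i' appears among the attribute flags
rm -f /mnt/veeamrepo/job1.vbk 2>/dev/null || echo "delete blocked"
chattr -i /mnt/veeamrepo/job1.vbk   # cleared once retention expires
```

Even a root-level attacker over the backup protocol cannot delete such files without first clearing the attribute on the repository host itself.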
We’ve had ~30 Ubuntu 20.04 repos in production since the v10 launch in 2020, all with reflink enabled; the majority are at client sites, plus a few ~500TB extents in SOBRs at our data centers. They have all been rock solid, which is to be expected from Linux.
stability - no issues at all
performance (over time) - hasn’t changed in a year; only limited by the disk IOPS available from your hardware
needs for troubleshooting - no troubleshooting over the past year
administrative effort - Linux experience, or being a fast learner, is important for the initial design and deployment. Hardware monitoring/testing to determine the number of concurrent jobs your hardware can handle requires some effort in the beginning, but that applies to any OS. After that, cron jobs take care of updates, and we schedule a manual reboot of the repos whenever an update notifies us that one is required.
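For anyone provisioning such a repo, a minimal setup sketch; the device and mount point are hypothetical, and note that reflink must be chosen at format time:

```shell
# Sketch: format a data volume as XFS with reflink for Veeam fast clone.
# /dev/sdb and /mnt/veeamrepo are example names; adjust for your hardware.
# reflink=1 is the default in recent xfsprogs (and requires crc=1), but
# stating both explicitly makes the intent clear; reflink cannot be
# enabled after formatting.
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb
mkdir -p /mnt/veeamrepo
mount /dev/sdb /mnt/veeamrepo
xfs_info /mnt/veeamrepo | grep reflink   # should show reflink=1
```

The `xfs_info` check is also handy on existing volumes: if it shows reflink=0, the extent has to be reformatted to get fast clone.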
Just that XFS and ReFS did not play well together, so it is best to keep them separate. It has been rock solid since.
Thank you @gtelnet for your very detailed answer! I am very relieved that the feedback about XFS is so positive. So v11 with hardened repos can come!
Thanks @Chris.Childerhose! So not mixing ReFS and XFS in a SOBR will be part of our best practices!
For those interested in experiences with XFS, check out this new post:
Good quality and performance of XFS in a real-world deployment.
I have two XFS-based repos, each with a ~445TB extent. Excellent performance and stability. This is my preferred filesystem.
Thanks for your answer! It is good to know that big extents do not cause problems!
I’ve done both XFS and EXT4 with the Hardened Linux Repo; both worked fine. If you’re using a physical repo server with JBOD storage, I would probably lean toward XFS so you can get the block clone features, especially since you have to use forward incremental with periodic fulls with HLR. I was using SAN storage with really good built-in native dedup, so in that case I decided to stick with EXT4, since I didn’t really need the block clone benefits.
One thing to note is that Veeam recommends a 64K block size for ReFS while XFS uses 4K, so I believe that alone would be a problem when trying to mix them in one SOBR.
Also keep in mind that Microsoft only supports TRIM/UNMAP for ReFS on Storage Spaces. ReFS shouldn’t be used with SAN storage, or you could get some really weird space accounting. Anton also calls this out in the R&D Forum.
From my perspective, TRIM/UNMAP is not necessary for backup repositories. At least it does not hurt if this feature is missing.
But I am not sure the advice not to use ReFS on SAN volumes is still true. As I remember, Microsoft added these limitations to their website when we faced massive performance problems within the first year after Windows 2016, when Veeam block cloning became available.
@jdw , thanks for your feedback!
Keep in mind that block clone also saves time (because it reduces the needed I/O dramatically) when merging the last increment into the last full.
Do you have any good test results showing a real-world comparison? This test is on my to-do list for the blog, but I have not had the time yet; it would be great to see other comparisons if they exist. My theory is that it will matter far more for low-end storage, but I’m not sure it will have as much of an advantage on the QLC-flash systems I primarily work with.
Block cloning when merging increments into fulls also matters for large installations. It depends on the environment of course, but I know of backups merging in minutes with block cloning compared to hours without.
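To make the scale of that difference concrete, here is a rough back-of-the-envelope sketch; the sizes and throughput are invented for illustration, not taken from this thread:

```shell
# Illustrative estimate: merging a 500 GB increment into a 10 TB full.
# All numbers are assumptions, including the sequential throughput.
FULL_GB=10240; INC_GB=500; DISK_MBPS=400

# Without block clone: every merged block of the chain is read and rewritten.
io_without=$(( (FULL_GB + INC_GB) * 2 ))   # GB moved (read + write)
# With block clone (XFS reflink / ReFS): only the increment's blocks are
# touched; the rest is remapped via nearly-free metadata updates.
io_with=$(( INC_GB * 2 ))

mins_without=$(( io_without * 1024 / DISK_MBPS / 60 ))
mins_with=$(( io_with * 1024 / DISK_MBPS / 60 ))
echo "without block clone: ~${io_without} GB moved, ~${mins_without} min"
echo "with block clone:    ~${io_with} GB moved, ~${mins_with} min"
```

With these assumed numbers the merge drops from roughly fifteen hours of data movement to about forty minutes, which lines up with the minutes-versus-hours experience reported above.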