Experience with XFS





Would like to warm up an old topic here again 😁

Some time ago I had to delete some huge backup files on different ReFS volumes. During the deletion the Windows server hung completely, and it took quite some time to delete the files. Today I did something similar on RHEL on an XFS volume. To keep it short: no server hang, and the duration felt like deleting files without block-clone pointers.

Great work, XFS!
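As an aside, the fast deletion on XFS comes from reflink (fast clone): a clone shares extents with its source, so dropping it is mostly a metadata update. A minimal sketch you can try with GNU coreutils (file names are made up for illustration; `--reflink=auto` falls back to a plain copy on filesystems without reflink support):

```shell
# Create a 256 MiB sparse test file (the name is just an example)
truncate -s 256M original.vbk

# On XFS formatted with reflink=1, this clones extents instead of
# copying data; elsewhere --reflink=auto does a normal copy
cp --reflink=auto original.vbk clone.vbk

# Deleting a reflinked file only drops extent references --
# that is why large deletes are so fast on XFS
rm clone.vbk original.vbk
```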

 

Yeah, deleting on XFS even with block cloning seems way faster than on wonderful Windows. 😋😂

I know Microsoft was bragging a lot about WS 2022 being up to 50x faster for ReFS operations such as this, so it would be good to see how that translates in the real world. I feel a heavy emphasis on “up to” is necessary!


I tried to set up XFS as a Veeam repository and failed. No real help from support either, so I had to settle for an NFS-based repository. But this was with v10.

Thanks for the information @gulzarshaikhveeam. What did not work?


I have installed a SOBR with three extents of XFS in our largest VCC datacenter. It works extremely well and we have not had issues. We did mix it with ReFS, but that was a bad idea, so we created a new SOBR just for XFS alone, with no mixing.

We also try to keep SOBRs at 2-3 extents maximum and then create new ones as needed.

Thanks Chris!

What was the problem(s) when mixing XFS with ReFS in a SOBR?

Just that XFS and ReFS did not play well together, so it is best to keep them separate. It has been rock solid since.

Thanks @Chris.Childerhose! So not mixing ReFS and XFS in a SOBR will be part of our best practices!

 


For those interested in experiences with XFS, check out this post:

https://forums.veeam.com/veeam-backup-replication-f2/we-did-some-refs-vs-xfs-tests-t72011.html#p400824

It shows the good quality and performance of XFS in a real-world deployment.


I have two XFS-based repos, each with a ~445TB extent. Excellent performance and stability. This is my preferred filesystem.

Thanks for your answer! It is good to know that big extents do not cause problems!


One thing to note: Veeam's recommended block size for ReFS is 64K, while for XFS it is 4K, so I believe that alone would be a problem when trying to mix them in one SOBR.
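For reference, a typical XFS format line for a Veeam repository looks something like the sketch below (`/dev/sdb1` is a placeholder device; double-check the options against Veeam's current guidance for your version):

```shell
# crc=1 enables metadata checksums; reflink=1 enables XFS fast clone,
# which Veeam uses for block cloning; 4K is the usual block size here
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb1
```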

Also keep in mind that Microsoft only supports Trim/UNMAP for ReFS on Storage Spaces. ReFS shouldn’t be used with SAN storage or you could get some really weird space accounting. Anton also calls this out in the R&D Forum. 

 

From my perspective, Trim/UNMAP is not necessary for backup repositories. At least it does not hurt if this feature is missing.

But I am not sure whether the advice not to use ReFS on SAN volumes is still valid. I remember that Microsoft added this limitation to their website when we faced massive performance problems within the first year after Windows Server 2016 and Veeam block cloning became available.


I’ve done both XFS and EXT4 with the Hardened Linux Repo; both worked fine. If you’re using a physical repo server with JBOD storage, I would probably lean toward XFS so you can get the block clone features, especially since you have to use forward incremental with periodic fulls with HLR. I was using SAN storage with really good built-in native dedup, so in that case I decided to stick with EXT4 since I didn’t really need the block clone benefits.
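If you already have an XFS repo and are unsure whether fast clone will work on it, one way to check is whether the filesystem was created with reflink support (the mount point below is just an example):

```shell
# Prints reflink=1 if the filesystem supports fast clone, reflink=0 if not
xfs_info /mnt/backup | grep -o 'reflink=[01]'
```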

@jdw, thanks for your feedback!

Keep in mind, block clone also saves time (because it dramatically reduces the required IOs) when merging the last increment into the full.


Do you have any good test results showing a real-world comparison? This test is on my to-do list for the blog, but I have not had the time yet; it would be great to see other comparisons if they exist. My theory is that it will matter far more for low-end storage, but I’m not sure it will have as much of an advantage on the QLC-flash systems I primarily work with.
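For a rough self-test of the fast-clone effect, here is a sketch (file names are examples; the reflink copy is only metadata-fast on a reflink-enabled filesystem such as XFS, and `--reflink=auto` silently falls back to a full data copy elsewhere):

```shell
# Build a 64 MiB file of random data to stand in for a full backup
dd if=/dev/urandom of=full.vbk bs=1M count=64 status=none

# Fast clone: near-instant on XFS with reflink, a data copy elsewhere
# (prefix both cp commands with `time` to compare durations)
cp --reflink=auto full.vbk synthetic.vbk

# Forced full data copy for comparison
cp --reflink=never full.vbk plaincopy.vbk

# Either way, the clone is byte-identical to the source
cmp full.vbk synthetic.vbk
```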


Block cloning when merging increments into fulls also matters for large installations. It depends on the environment, of course, but I know of backups merging in minutes with block cloning compared to hours without.


Most definitely you can mix them to migrate, I would assume; we just had issues using them together. Adding XFS and then sealing the ReFS extents would be the best solution, as noted.


Yes, the merge speedup is for certain. One fun way to find out is to accidentally add an NTFS repo to your ReFS SOBR :). Then, when a new backup chooses the NTFS extent because it has more space, you can watch as it takes at least 10 times longer to perform synthetic operations :). This happened once by accident, so I witnessed it personally.


OK, I have had no problems with large deletes on ReFS up to now. It takes some time to delete a big file with many pointers, but I did not have a server hang.

A comparison between ReFS and XFS would be interesting… Maybe this summer 😎


Hi @StefanZi - the issue we saw was when a job started on XFS and then for some reason wrote over to ReFS; it seemed to cause major issues on the VCC server and repos. I cannot fully explain it, but when we separated XFS from ReFS, everything worked great from that point on.

I know the documentation says you can mix them, but the people I spoke with at Veeam recommended not to. So that is the approach we have been taking lately.

XFS has been great, and I am looking forward to the immutable storage now on it as well as the Linux proxies. 😎

Chris, thanks for your details!

So I think it should be safe to migrate from ReFS to XFS in the way @StefanZi outlined.
