
When Every Terabyte Counts: How I Chose Between XFS/ReFS and S3 Object Lock for Backups

  • December 18, 2025

Andanet

Intro

In a previous article, I wrote about the types of storage a customer can use as a backup target.

Some days ago, a user in the Veeam Community asked why, according to Veeam's calculator, S3 object storage is no longer more efficient in terms of space (with immutability and GFS) than storage using XFS/ReFS technology.

After answering, I decided to explore the topic further by putting myself in the shoes of a customer who has to make a choice based on the costs of the space used. That is why I want to tell you what happened with one of my customers.

When I start a new backup infrastructure project, there always comes a time when I have to decide what type of storage repository to use. When I submit a plan to the customer, I often hear them say: "Why should I spend more and use more space than requested?"

It usually happens when the storage budget is already tight, immutability is mandatory, and you are caught between security, costs, and unrealistic expectations.

Situation: the problem was not only space

The problem was not just about space.

A client asked me to look into purchasing on-premises object storage of the same size as their current Hardened Repository, whose hardware was due to be retired because it had reached end of life (EOL). This was a mixed environment with around 100 virtual machines and strong data growth.

In their view, the sizing had to match, because the backup chain and the data were the same.

After a series of checks and ascertaining that they were already achieving excellent optimisations with XFS, I pointed out that moving everything to S3 with Object Lock would incur additional costs in terms of space.

Furthermore, the customer assumed that "immutable = more secure" without linking that security to the actual cost per TB.

There was pressure to meet IT resilience requirements without exceeding the budget.

Task: make a technical decision… and make it acceptable

The task, as I understood it, was to come up with a plan for a repository that would bring all of these things together:

  • Strong immutability.
  • The best possible space efficiency.
  • Predictable costs over time.

I also had to make sure the customer understood that S3 Object Lock isn't really "free" in terms of capacity and that XFS is still the best bet for efficiency.

So, the plan was basically to take a standard request – 'Put my backups on immutable S3' – and turn it into a solid architecture that everyone could get on board with.

Action: put numbers on the table and tell the right story

I turned the problem into a 'real backup chain'.

Instead of talking in abstract percentages, I took a typical workload:

About 20TB of source data, 3% daily change, 30 days of retention + GFS (weekly, monthly, yearly).
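To make the workload above concrete, here is a back-of-the-envelope sizing sketch. All the factors (the 2:1 data reduction ratio, the incremental size) are my own assumptions for illustration, not output from Veeam's calculator:

```python
# Rough sizing sketch for the example chain.
# All ratios below are illustrative assumptions, not calculator output.
source_tb = 20.0        # source data
change_rate = 0.03      # ~3% daily change
reduction = 0.5         # assumed 2:1 compression/dedup ratio

full_tb = source_tb * reduction                  # one compressed full
incr_tb = source_tb * change_rate * reduction    # one compressed incremental

# 30 days of retention: roughly one full plus 29 incrementals.
# With Fast Clone, synthetic fulls reuse existing blocks, so they add
# mostly metadata and pointers rather than another full copy.
chain_tb = full_tb + 29 * incr_tb
print(f"full: {full_tb:.1f} TB, daily incremental: {incr_tb:.2f} TB")
print(f"~{chain_tb:.1f} TB for a 30-day chain (before GFS)")
```

With GFS points on top, this lands in the same ballpark as the on-disk figures discussed below, which is exactly the point: on XFS/ReFS the synthetic fulls cost almost nothing extra.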

 

First, I got them to think about keeping everything on XFS/ReFS with Fast Clone. I asked how many physical TB they really need when every synthetic full is mostly metadata and pointers.
I then applied the same logic to S3, adding:

  • Metadata and object overhead.

  • Immutability duration.

  • The effect of GFS on locked objects.

I wasn't sure I had the exact gigabyte size, but I tried to provide an honest estimate:

“If today, on XFS, something like ~20–22 TB is enough, on S3 with Object Lock for the same chain you should expect something around 24–26 TB.”
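The estimate above can be sketched as a simple multiplication. The overhead factors here are hypothetical placeholders; the real overhead depends on block size, object count, and how long Object Lock pins blocks that GFS retention keeps around:

```python
# Hypothetical overhead factors for illustration only.
xfs_tb = 21.0            # mid-point of the ~20-22 TB XFS estimate
object_overhead = 0.05   # assumed metadata / per-object overhead
lock_overhead = 0.12     # assumed extra blocks pinned by Object Lock + GFS

s3_tb = xfs_tb * (1 + object_overhead + lock_overhead)
print(f"S3 with Object Lock: ~{s3_tb:.1f} TB")  # lands in the 24-26 TB range
```

The exact percentages matter less than the shape of the argument: the same logical chain simply occupies more billable capacity once objects are locked.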

 

I showed two architectures, not just one.

First scenario: a hardened XFS repository on-premises.

  • Immutability at the filesystem level.
  • Fast Clone is great for maximising space.
  • It has the best TB/euro ratio, but has less ‘logical air gap’ than a public cloud.

Second scenario: an S3-type object repository with Object Lock.

  • Immutability applied in an on-premises object repository is great for compliance and for protection against destructive attacks.
  • It is the same logical backup chain, but has a higher overhead per TB.
  • It offers greater flexibility for off-site DR, but the storage budget is higher.

I made the discussion about costs more understandable.

So, after discussing with the customer and weighing the comparison against the budget, it was decided that:
"A hardened XFS repository was the right solution."
As they also had a secondary site, they purchased a second server to use as a secondary repository.

 

 

Instead of simply saying "it costs 20% more", I tried to frame the issue in terms the finance director could understand.

I converted the extra TB into euros per year.
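As an illustration of that conversion, a minimal sketch (the price per TB is an assumed figure, not a real quote):

```python
# Illustrative only: the price per usable TB/year is an assumption.
extra_tb = 4.0            # delta between the XFS and S3 estimates
eur_per_tb_year = 250.0   # assumed all-in cost per usable TB per year

extra_cost = extra_tb * eur_per_tb_year
print(f"extra capacity costs ~EUR {extra_cost:,.0f} per year")
```

A single recurring number per year is far easier to defend in a budget meeting than an abstract overhead percentage.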

I compared it to the cost of a single serious incident, where the immutability of the cloud can make a difference.

I explained how, in other projects, companies have accepted that delta simply to sleep more peacefully after suffering a ransomware attack.

The key was to link the numbers to real-life stories, rather than just using impersonal charts.

 

Result: a hybrid choice and… fewer conflicts

In the end, the outcome was not “XFS wins” or “S3 wins”, but something much more mature:

  • Primary tier: on‑prem hardened XFS repository for operational backups and fast restores, maximizing space efficiency.

  • Secondary tier: S3 with Object Lock for a longer protection window, with an optimized chain and different retention, consciously accepting extra space compared to on‑prem.

The main practical effects:

  • The security team got the “out‑of‑band” immutability they wanted.

  • IT kept performance and density where they matter most (on‑prem).

  • The CFO could tie the extra TB in S3 to a clear justification, not to a "technical obsession".

And maybe the most important part: any time someone now asks “why does S3 use a bit more space than XFS?”, the customer already has the answer internalized: it is not a bug in the backup product, it is the price of a different protection model.

1 comment

Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • December 18, 2025

Very interesting you went with S3 and not a combo.  Thanks for sharing. 👍