Question

Veeam 12.3 ReFS deduplication together with job deduplication?

  • November 2, 2025
  • 4 comments
  • 44 views

Hi,

I’m new to Veeam family and doing initial deployment.

What is the recommendation for backup jobs: one job per VM, or grouping VMs that share the same backup scheme into a single job? How much does this affect deduplication?

Is ReFS deduplication compatible with job deduplication? Can both be enabled at the same time?

Is there any report on job deduplication? To me it looks like it does not deduplicate anything.

Thanks

4 comments

waqasali
  • Influencer
  • 405 comments
  • November 3, 2025

Group VMs with similar backup needs in one job; it improves deduplication and simplifies management.
ReFS block cloning works with Veeam’s job-level deduplication, and both can be enabled.
Deduplication stats are visible in job logs or via Veeam ONE.


regnor
  • Veeam MVP
  • 1387 comments
  • November 3, 2025

Hi @Tibor!

Veeam does compression and deduplication per VM backup chain, so from that point of view it doesn’t matter how you group your VMs in your backup jobs.

In general I would try to keep it simple and not configure too many jobs. Group VMs according to their retention or backup schedule, and keep the jobs dynamic.

While ReFS deduplication is possible, it’s not recommended because of its performance implications. With Fast Clone and Veeam’s own compression/deduplication, there is also little need for global deduplication.


Tommy O'Shea
  • Veeam Legend
  • 359 comments
  • November 3, 2025

I wouldn’t recommend doing multiple layers of deduplication. Sure, you may eke out slightly more storage savings; however, it may slow down both your backups and restores significantly.

In fact, there are also some settings available in the storage tab of backup jobs that optimize for deduplication targets.


  • Comes here often
  • 192 comments
  • November 5, 2025


@Tibor it’s best to clarify whether you’re talking about ReFS + Windows Deduplication in conjunction with Veeam compression/dedup, or about ReFS Fast Clone + Veeam compression and dedup.

I will drop what I consider law:

Don’t use ReFS + Windows Deduplication

You can do it, but from my time in Veeam Support, I’ve seen this break Fast Clone very quickly, and we also saw quite a few data-loss situations where data was removed from the ReFS namespace due to issues, never mind the extreme performance problems.

Fast Clone savings are separate from Veeam dedup/compression savings -- Fast Clone saves space by making a reference to an existing block on the volume instead of copying it, should two files need the same data. Check the examples in this article from Microsoft; I find it explains it well. The space savings come from files sharing the same blocks.

Veeam dedup/compression works on the source data, clearing out whitespace/deleted items and deduplicating similar data blocks from the production VM, meaning the backup file itself is smaller.

See the difference in where ReFS Fast Clone and Veeam dedup/compression operate?
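To make the block-cloning side concrete, here is a small sketch on a reflink-capable Linux filesystem such as XFS (the mechanism is analogous to ReFS block cloning; the mount point and sizes are illustrative, and the commands assume GNU coreutils):

```shell
# Write a 256 MiB test file on a reflink-capable volume (e.g. XFS with reflink enabled)
dd if=/dev/urandom of=/mnt/repo/full.vbk bs=1M count=256

# "Copy" it by reference: the new file shares the same on-disk blocks,
# so almost no additional space is consumed
cp --reflink=always /mnt/repo/full.vbk /mnt/repo/synthetic-full.vbk

# Logical sizes show two 256 MiB files...
du -h --apparent-size /mnt/repo/full.vbk /mnt/repo/synthetic-full.vbk

# ...but physical usage on the volume grows by only roughly one file's worth
df -h /mnt/repo
```

This is the same kind of saving Fast Clone gives Veeam when a synthetic full references blocks from an earlier backup file instead of rewriting them.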

(Side note: our reference architecture now is XFS instead of ReFS -- if you’re not experienced with Linux, v13 has the Veeam Infrastructure Appliance, which requires zero Linux experience to deploy and manage.)
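If you do go the XFS route, the volume has to be formatted with reflink enabled for Fast Clone to work. A minimal sketch, assuming a dedicated block device and mount point (both names are illustrative), following the commonly documented settings for XFS-based Veeam repositories:

```shell
# Format the repository volume with 4 KiB blocks, CRCs, and reflink enabled
# (reflink support is what Fast Clone relies on)
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb1

# Mount it where the Veeam repository will live
mkdir -p /mnt/veeam-repo
mount /dev/sdb1 /mnt/veeam-repo

# Confirm reflink support is active on the mounted filesystem
xfs_info /mnt/veeam-repo | grep reflink
```

When adding the repository in Veeam, the fast-cloning option for XFS volumes also needs to be enabled in the repository's advanced settings.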


So in short:

  1. Fast Clone + Veeam Dedup / Compression are fine
  2. XFS is preferred to ReFS
  3. Never run Windows Deduplication with ReFS