Discrepancy between used space in a bucket and Veeam?

  • 22 February 2024

Hello all,

I have been looking in the forum but have not found information on this. At my job we are using Nutanix Objects; one of the buckets shows 450 TB of used space on Nutanix, but when I check in Veeam, the backup software, it only shows 90 TB. Is there an issue with this space not being reclaimed? In a previous post, I found that ReFS takes some time to free up space and show the proper amount.

I hope you can shed some light on this.


Best answer by MicoolPaul 22 February 2024, 01:43



Userlevel 6
Badge +3

Nutanix uses a Curator process scan to delete objects; see the link below for more details. This can take a while depending on the scan schedule.

This KB gives you a good explanation -

If you have any issues, I would contact Nutanix support to check whether there are any corruption issues 😊

Userlevel 7
Badge +20



Two things come to mind here:

  1. Veeam is reporting the size of all the objects it has written, whereas Nutanix might be showing you the total raw consumption, including the multiple copies of each object kept across the nodes for redundancy.
  2. You might have orphaned data that Veeam is no longer using but Nutanix didn't delete, for example because delete requests overwhelmed the S3 endpoint.

This is speculation, as I've not seen your environment, nor do I have a Nutanix Objects setup available to test, but these are common issues with object storage that I'm aware of.
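To illustrate the first point with back-of-the-envelope numbers: if Nutanix is reporting raw consumption, the redundancy copies alone would multiply Veeam's logical figure. The replication factor below is a pure assumption for the sketch (check your actual storage container settings); only the 450 TB and 90 TB figures come from the thread.

```python
# Rough accounting sketch. Only nutanix_raw_tb and veeam_logical_tb come from
# the thread; replication_factor is a hypothetical assumption (RF2).
nutanix_raw_tb = 450.0    # used space reported by Nutanix Objects
veeam_logical_tb = 90.0   # size of the objects Veeam says it has written

replication_factor = 2    # assumed RF2; RF3 would triple the logical data

# Raw space you'd expect if redundancy copies were the only multiplier.
expected_raw_tb = veeam_logical_tb * replication_factor   # 180 TB

# Whatever remains is not explained by redundancy alone, e.g. orphaned or
# garbage data still waiting for a Curator scan to reclaim it.
unexplained_tb = nutanix_raw_tb - expected_raw_tb          # 270 TB
```

Under that assumed RF2, redundancy would only account for 180 of the 450 TB, which is why the orphaned-data possibility in point 2 is worth checking as well.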

Userlevel 7
Badge +21

There are a few factors at play, such as deduplication and block sizing within your job. Veeam does a good job of reducing data when it sends backups to object storage.

Explanation here -
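One way to see which figure Veeam's 90 TB corresponds to is to sum the logical sizes of the objects in the bucket yourself over the S3-compatible API. This is a minimal sketch, not anything from the thread: the bucket name and endpoint URL are placeholders, and it assumes the `boto3` client works against the Nutanix Objects endpoint.

```python
# Sketch: sum the logical size of every object in a bucket via ListObjectsV2,
# to compare against both the Veeam figure and the Nutanix raw-consumption figure.

def total_logical_bytes(pages):
    """Sum object sizes across ListObjectsV2-style result pages."""
    return sum(obj["Size"] for page in pages for obj in page.get("Contents", []))

def bucket_logical_bytes(bucket, endpoint_url):
    """List a bucket on an S3-compatible endpoint and total its object sizes.

    Requires boto3 and credentials configured for the endpoint; both the
    bucket name and endpoint URL here are hypothetical placeholders.
    """
    import boto3  # imported here so total_logical_bytes stays dependency-free
    s3 = boto3.client("s3", endpoint_url=endpoint_url)
    paginator = s3.get_paginator("list_objects_v2")
    return total_logical_bytes(paginator.paginate(Bucket=bucket))
```

If the total lands near Veeam's 90 TB, the gap to 450 TB is on the storage side (redundancy copies plus space not yet reclaimed), rather than Veeam under-reporting.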