Solved

Discrepancy between used space in a bucket and Veeam?

  • 22 February 2024
  • 3 comments
  • 40 views

Hello all,

I have been looking in the forum but have not found any information on this. At my job we are using Nutanix Objects, and one of the buckets shows 450 TB of used space on Nutanix, but Veeam, the backup software, only shows 90 TB. Is there an issue with this space not being reclaimed? In a previous post I found that ReFS takes some time to free up space and show the proper amount.

I hope you can shed some light on this.


Best answer by MicoolPaul 22 February 2024, 01:43


3 comments

Userlevel 7
Badge +20

There are a few things at play here, such as deduplication and the block sizing within your job; Veeam deduplicates and compresses data before sending backups to object storage.

There is an explanation here - https://helpcenter.veeam.com/docs/backup/hyperv/compression_deduplication.html?ver=120


Userlevel 7
Badge +20

Hi,

Two things come to mind here:

  1. Veeam is reporting on the size of all the objects it has written, whereas Nutanix might be showing you the total raw consumption, including the multiple copies of objects kept across the nodes for redundancy.
  2. You might have orphaned data that Veeam is no longer using but that Nutanix never deleted, for example because the S3 endpoint was overwhelmed during deletion.

This is speculation, as I haven't seen your environment and don't have a Nutanix Objects setup available to test, but these are common issues with object storage that I'm aware of.
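One way to narrow it down is to ask the S3 endpoint itself how much logical data the bucket holds and compare that against both numbers. Below is a minimal sketch using Python with boto3; the endpoint URL, credentials, and bucket name are placeholders you would replace with your own, and it assumes your Nutanix Objects endpoint is reachable as a standard S3-compatible API:

```python
# Minimal sketch: sum the logical size of all objects in a bucket
# (roughly what Veeam accounts for) and look for incomplete multipart
# uploads, a common source of orphaned space.
# The endpoint URL, credentials, and bucket name below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.local",  # hypothetical Nutanix Objects endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

bucket = "veeam-backups"  # hypothetical bucket name

# Sum the logical size of every current object the endpoint reports.
total_bytes = 0
object_count = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]
        object_count += 1

print(f"{object_count} objects, {total_bytes / 1024**4:.2f} TiB logical")

# Incomplete multipart uploads hold space even though no finished
# object exists for them; Veeam would not report these.
uploads = s3.list_multipart_uploads(Bucket=bucket).get("Uploads", [])
print(f"{len(uploads)} incomplete multipart uploads")
```

One caveat: if the bucket is versioned (for example when Veeam immutability is in use), non-current object versions also consume space but won't appear in a `list_objects_v2` listing; you would need `list_object_versions` to count those as well. If the logical total is close to Veeam's 90 TB, the gap is likely redundancy overhead or orphaned data on the Nutanix side rather than anything Veeam is doing.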

Userlevel 6
Badge +3

Nutanix uses a Curator process scan to delete objects; see the link below for more details. This can take a while depending on the scan schedule.

https://next.nutanix.com/how-it-works-22/curator-scans-types-and-frequency-or-where-is-our-free-space-33516

This KB gives you a good explanation - https://portal.nutanix.com/page/documents/kbs/details?targetId=kA03200000098CQCAY

If you still have issues, I would contact Nutanix support to check whether there are any corruption issues 😊
