Solved

Immutability on copy jobs - disk space requirements?


Sorry for the simple question, that’s no doubt been asked and answered before…

Today we use VCC for our off-site cloud backups, and immutability is provided by "Insider Protection", which in short means we pay for the privilege but don't need to worry about disk space consumption over and above our actual backups.

For lower cost we're considering object storage, probably Wasabi, and we'll want immutability there too. The thing I need to be sure of is the disk space required. I'm not sure whether the copy jobs use a specific backup mode, but with incrementals, if they're immutable, are your previous backups kept for longer? Could the disk space be double what you'd otherwise need?

Thanks in advance for any insight on real-world disk space requirements. Today our VCC repo is approx. 27 TB, and as I say, the immutable data is retained at a fixed additional cost, so we don't know its size.
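For a rough feel of what immutability does to capacity, here is a back-of-the-envelope sketch. Every figure in it (change rate, retention, immutability window) is a hypothetical placeholder, not real data from this environment, and it ignores compression, dedupe, and block generation:

```python
# Back-of-the-envelope sizing for an immutable backup repository.
# All numbers are hypothetical placeholders -- plug in your own.

def estimate_repo_tb(full_tb, daily_incremental_tb, retention_days, immutability_days):
    """Immutable blocks cannot be deleted early, so incrementals are
    effectively kept for the longer of the retention and immutability
    windows (simplified model)."""
    keep_days = max(retention_days, immutability_days)
    return full_tb + daily_incremental_tb * keep_days

# Example: 27 TB full, ~2% daily change, 14-day retention, 30-day immutability.
# The 30-day immutability window, not the 14-day retention, drives the size.
print(round(estimate_repo_tb(27.0, 27.0 * 0.02, 14, 30), 1))
```

The point of the `max()` is that immutability can only add space when it is longer than your normal retention; if retention already exceeds the immutability window, the overhead is roughly zero.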

Best answer by calabro · March 29, 2025

Word of caution: consider not only the immutability period and retention, but also the block generation, especially for AWS object storage. Depending on which cloud storage you are using, this can considerably change your API charges.

You can use the Veeam calculators and select the block generation. There are additional API charges for immutability depending on the cloud (the defaults are now tunable in v12.x).

Amazon S3 recently changed its default from 10 to 30 days.

Most other clouds (Azure Blob, etc.) are set to 10 days.

Please thoroughly read the Veeam Block Generation help article.

I would also advise using cloud cost-allocation tags to help understand and control costs.
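As a simplified illustration of the block generation point above (not Veeam's exact internal logic; see the help article for the real behavior), a block written at the start of a generation can stay locked beyond the configured immutability period:

```python
def effective_immutability_days(configured_days, block_generation_days):
    """Simplified model: a block may remain immutable for the configured
    period plus the block generation window it was written into."""
    return configured_days + block_generation_days

# Comparing the common 10-day generation against AWS S3's newer 30-day default:
print(effective_immutability_days(30, 10))  # 40
print(effective_immutability_days(30, 30))  # 60
```

The longer a block stays locked, the longer its space is held and the more immutability-related API calls accrue, which is why the generation setting matters for both capacity and cost.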


4 comments

Chris.Childerhose

You can use the calculators to see how much storage you require; there is a Veeam one, and Object First has one too:

https://www.veeam.com/calculators/simple/vbr/machines

https://objectfirst.com/backup-storage-calculator/

 


Dynamic · Veeam Vanguard · March 27, 2025

Some things to consider:

 


Scott · Veeam Legend · March 29, 2025

Good call on the minimum storage duration at Wasabi, and Chris linked some good info.

Always remember you will need the full chain. I believe the future is going to be retention by "days" instead of "restore points" too, so keep that in mind. If you run a bunch of jobs manually, it could add to your restore points, and they will still have to meet the immutability commitments.

Always budget a bit of extra storage for anomalies or unexpected growth. Other factors can change the incremental sizes as well, such as a busy day on the file servers, something modifying a bunch of data, or even someone pushing a ton of updates. Try to get some averages of your daily, weekly, and monthly growth/change rates. Once you have that, find the day with the highest change and use it as a reference as well.
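A quick way to turn those change-rate averages into a budget figure; the per-day sample values below are made up for illustration:

```python
# Hypothetical daily incremental sizes in GB over one week of measurements.
daily_change_gb = [310, 295, 480, 330, 900, 315, 305]

average = sum(daily_change_gb) / len(daily_change_gb)
peak = max(daily_change_gb)

print(f"average: {average:.0f} GB/day")  # what a typical day looks like
print(f"peak:    {peak} GB/day")         # size capacity to this, plus headroom
```

Sizing to the peak day rather than the average is what absorbs the anomalies mentioned above, since an immutable repository cannot shed an unusually large incremental early.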

Veeam ONE is a great tool for monitoring this stuff.

 

 



