
Hi 

 

I am looking for some guidance on backup data export sizing with Kasten. Let's say my case has 10 TB of production data, daily backups with 30 restore points, and a 5% daily delta change. I assume compression and deduplication will save around 50%. Does that mean I need at least 10 TB × 30 × 50% = 150 TB for my backup repository?

I do see in a lab that the second backup is much smaller than the first backup. Is there any incremental backup / global deduplication in the backup data export behind the scenes?

 

Chris

Kasten definitely makes incremental backups. You will not need space for 30 full backups in your case.

 



It definitely does. Hopefully the experts chime in too, as I have not used it much and am just getting into it. Paging @Geoff Burke



Hi Chris,

 

So, as everyone has said, Kasten leverages incremental snapshots, and there is a certain level of deduplication and compression with the offload as well. The offloads can then be sent to S3 or NFS. A lot will of course depend on your workloads and rate of change. You can find more info in the Kasten documentation: https://docs.kasten.io/latest/usage/protect.html
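To put rough numbers on that for Chris's case, here is a back-of-the-envelope sketch under an incremental-forever model (one full export, then daily deltas). The 5% daily change rate and 50% combined compression/deduplication savings are the assumptions from the question, not measured Kasten figures, so treat the result as an estimate only:

# Rough repository sizing sketch, assuming incremental-forever exports.
# The change rate and savings factor below are the question's assumptions.

source_tb = 10.0        # production data
restore_points = 30     # daily restore points retained
daily_change = 0.05     # 5% delta per day (assumption)
savings = 0.50          # combined compression + deduplication (assumption)

naive_full_copies = source_tb * restore_points * savings
first_full = source_tb * savings
incrementals = (restore_points - 1) * source_tb * daily_change * savings
incremental_forever = first_full + incrementals

print(f"30 full copies:      {naive_full_copies:.1f} TB")    # 150.0 TB
print(f"Incremental-forever: {incremental_forever:.2f} TB")  # 12.25 TB

So under those assumptions the repository lands closer to roughly 12 TB than 150 TB; add headroom for data growth, retention changes, and days where the change rate spikes.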

