Hello,

In terms of cost (price): is there any difference between direct-to-object storage and a scale-out repo (Amazon S3)?

Is it possible that the writes will be smaller with scale-out, since the backup has already been created?

 

Thanks 

Hi,

I want to state an assumption up front: I'm assuming that by scale-out repo you mean the capacity tier of a SOBR rather than the performance tier, since an object-backed performance tier would be direct to object anyway.

There isn’t anything I’ve seen to indicate that Veeam would create smaller backups, as you still need to protect all the blocks of data, and Veeam performs the same task of creating per-block blobs within the object storage repository. Any efficiencies I would expect to come from differences in backup frequency in such a scenario. The one place I wonder whether you would see cost savings (and I’m speculating here) is the API calls needed to perform verification tasks when using direct to object vs capacity tier: think of health checks being run against the performance tier instead of the capacity tier, and other tasks such as SureBackup.
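To make that speculation concrete, here's a minimal back-of-envelope sketch in Python. The per-request price, block count, and read pattern are all illustrative assumptions, not real AWS or Veeam figures; the point is only that a verification pass which touches every object generates GET requests against S3, while a SOBR can run the same check against the local performance tier.

```python
# Back-of-envelope estimate of S3 API-request cost for a verification pass.
# All prices and counts below are illustrative assumptions, not real quotes.

GET_PRICE_PER_1000 = 0.0004   # assumed $ per 1,000 GET requests

def verification_cost(num_objects: int, gets_per_object: int = 1) -> float:
    """Cost of a health-check-style pass that reads every object once."""
    total_gets = num_objects * gets_per_object
    return total_gets / 1000 * GET_PRICE_PER_1000

# Hypothetical backup: 2 TiB stored as 1 MiB blocks -> ~2 million objects.
num_objects = 2 * 1024 * 1024

direct = verification_cost(num_objects)  # direct to object: reads hit S3
sobr = 0.0                               # SOBR: check runs on performance tier

print(f"direct-to-object health check: ~${direct:.2f} in GET requests")
print(f"SOBR (check runs locally):     ~${sobr:.2f}")
```

Even at fractions of a cent per thousand requests, millions of per-block objects add up once verification runs on a schedule, which is why where those checks execute could matter more for cost than backup size.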

 

I welcome any other viewpoints on this 🙂



Thanks for the answer.

I was referring to a SOBR: backups land first on the performance tier (local) and are then moved to the capacity tier (S3). My thought is that backing up locally first and then moving the backups to S3 might result in lower costs than direct to object.


Hi,

 

If you’re using the same block size and intending to copy all backups to the capacity tier, rather than moving only GFS restore points for example, then no, it won’t result in lower storage costs. As I mentioned, there is potential for lower API costs, however. For example, if you need to restore a backup from the capacity tier but you’ve got most of the blocks locally, only the blocks that aren’t stored locally will be pulled from the capacity tier, and things such as SureBackup can run from your performance tier instead. Otherwise it’s the same.
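As a rough illustration of that restore behaviour, here's a simplified Python sketch. The function names and data structures are hypothetical stand-ins (Veeam's internals aren't public); it only shows the idea that blocks already on the performance tier are served locally and only the missing ones trigger reads from the capacity tier.

```python
# Simplified sketch of a tiered restore: prefer local blocks, fetch the rest.
# The structures and fetch function are hypothetical stand-ins, used only to
# illustrate the idea, not Veeam's actual implementation.

def restore(block_ids, local_blocks, fetch_from_capacity_tier):
    """Reassemble a backup, pulling from S3 only what isn't held locally."""
    restored = {}
    remote_fetches = 0
    for block_id in block_ids:
        if block_id in local_blocks:       # performance tier hit: no S3 call
            restored[block_id] = local_blocks[block_id]
        else:                              # capacity tier miss: one S3 GET
            restored[block_id] = fetch_from_capacity_tier(block_id)
            remote_fetches += 1
    print(f"{remote_fetches}/{len(block_ids)} blocks pulled from capacity tier")
    return restored

# Toy example: 5 blocks in the backup, 3 already on the performance tier.
local = {1: b"a", 2: b"b", 3: b"c"}
result = restore([1, 2, 3, 4, 5], local, lambda i: f"block-{i}".encode())
```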


Hello @Srt93 

If you are asking about direct backup to object storage vs a SOBR (local disk + S3), there are a few points that need to be clarified:

  • Direct backup to object storage doesn’t support plug-in backups.
  • Regarding cost and used space, object storage is better than SAN storage.
  • Backup size will be roughly the same.
  • I prefer a SOBR (local disks as performance tier + S3 as capacity tier) for a fast backup and restore window.

Thank you!

Everything is clear to me!


I will note that block size does matter. A couple of others and I did some testing a while back on how much space is consumed when backing up to Wasabi or other object storage with different block sizes. That WILL make a difference in how much space is consumed, but it’s not a change due to object vs block storage or SOBR vs direct to object; it just depends on how large your blocks are. It sounds like that wasn’t the question, but I wanted to put it out there that block size does matter.
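To illustrate that point, here is a toy calculation assuming one blob per block and a made-up fixed metadata overhead per object. Real consumption depends on compression and change rate, so this isn't a prediction for Wasabi or any other provider; it just shows that block size drives object count, and with it per-object overhead and API-request volume.

```python
# Illustrative only: how block size changes object count and overhead when a
# backup is stored as one blob per block. The source size and per-object
# overhead are assumptions, not measured Veeam or Wasabi figures.

DATA_SIZE = 2 * 1024**4          # 2 TiB of source data (hypothetical)
PER_OBJECT_OVERHEAD = 8 * 1024   # assumed 8 KiB metadata overhead per object

for block_size_kib in (256, 512, 1024, 4096):
    block_size = block_size_kib * 1024
    num_objects = DATA_SIZE // block_size
    overhead = num_objects * PER_OBJECT_OVERHEAD
    print(f"{block_size_kib:>5} KiB blocks: {num_objects:>10,} objects, "
          f"~{overhead / 1024**3:.1f} GiB assumed overhead")
```

Smaller blocks tend to track changed data more tightly between runs, but they multiply the object count; larger blocks cut object count and request volume at the cost of coarser incrementals.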

