Block size in combination with Veeam Cloud Connect and Object Storage


Userlevel 7
Badge +11

Hi all

I have already read several articles and posts from people like @MicoolPaul, @olivier.rossi, @Chris.Arceneaux and others regarding object storage and the block size that is used.

In this article, a bit more about that topic in combination with Veeam Cloud Connect.

 

Goal : 

My goal is to set up a new Veeam Cloud Connect infrastructure for several hundred customers.

 

Setup : 

The Veeam Cloud Connect setup is an infrastructure consisting of several SOBRs, each made up of multiple extents (performant physical block-storage servers with ReFS, 64K) as the performance tier in one datacenter, plus a capacity tier with object storage (performant physical object storage devices) in another datacenter, using copy mode and immutability.

Of course there is a performant interconnection between the two datacenters.

 

Advantages of this setup : 

There are, in my opinion, a lot of advantages to this setup : 

  • we use two datacenters and all data is copied from one datacenter to the other
  • if one performance extent fails, no problem: all restore points are still available for the customer to restore from, using the corresponding restore points on the object storage
  • by using copy mode to object storage with immutability, all restore points appear immutable to the tenant, who is not aware that the SP is using this setup
  • very performant, because very fast block storage is used as the primary cloud storage
  • an issue with the object storage has no direct impact on the customer
  • there are no egress or API costs, because we use our own on-premises object storage devices (in a datacenter)
  • very scalable block storage: just add extra extents if needed
  • very scalable object storage: just add extra object storage devices if needed
  • ...

 

Recommendations : 

A recommendation from almost all object storage vendors is to use a 4MB or even 8MB block size instead of the default 1MB block size recommended by Veeam.

Why is that?

That’s because throughput and performance are higher with larger block sizes: they result in fewer, larger objects, whereas a smaller block size results in a much higher number of objects on the object storage.
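To make that arithmetic concrete, here is a minimal Python sketch. The ~50% compression ratio and the 100TB source size are assumptions purely for illustration (the ratio roughly matches the object sizes discussed further down in the comments); real ratios depend entirely on the data.

```python
# Rough estimate of how many objects land on the object storage for a given
# source size and Veeam block size. Assumption: each source block becomes one
# object, compressed to roughly 50% of the block size.

def estimate_objects(source_tb: float, block_size_mb: int, compression: float = 0.5):
    """Return (approx. object size in MB, approx. number of objects)."""
    object_size_mb = block_size_mb * compression   # one compressed object per block
    source_mb = source_tb * 1024 * 1024            # TB -> MB
    num_objects = int(source_mb / block_size_mb)   # object count is independent of compression
    return object_size_mb, num_objects

for block in (1, 4, 8):
    size_mb, count = estimate_objects(source_tb=100, block_size_mb=block)
    print(f"{block} MB blocks -> ~{size_mb:.1f} MB objects, ~{count / 1e6:.1f} million objects per 100 TB")
```

Going from 1MB to 4MB blocks cuts the object count by a factor of four, which is exactly why the object storage vendors push for the larger block size.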

 

Problem :

The problem is that you can’t set the block size on a copy job or offload job (SOBR towards object storage).

The only place where you can set the block size is on the primary backup job. All secondary copy jobs and offload jobs (SOBR) will use the block size of the primary backup job for that particular VM/instance.

In the case of Veeam Cloud Connect, almost every tenant runs on-premises primary backups plus a backup copy to the Veeam Cloud Connect infrastructure to have an offsite and immutable copy.

OK, fine : then we set the block size of the primary backup job to 4MB instead of the 1MB recommended by Veeam, and run an active full so the 4MB block size becomes active on the primary repository.

 

Result : 

I have done several tests with several kinds of content, and each time the result is more or less the same.

The full backup is more or less the same size on the primary repository and, of course, also on the Veeam Cloud Connect performance tier.

BUT, as expected, the incremental backups are much bigger when using a 4MB block size instead of a 1MB block size!

I knew this would result in bigger incremental backups, but the difference is much higher than expected. I even had incremental backups that were more than 200% bigger in size!

An overview : 

The job consists of 4 VMs (a Citrix server, a domain controller, a management server and a SQL server); a rough projection of these numbers over a retention chain follows the list below.

 

  • 1MB : full is 320GB and incremental is 16GB
  • 4MB : full is 320GB and incremental is 42GB !!!
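
Extrapolated over a retention chain, the difference adds up quickly. A minimal sketch, assuming a 30-point retention (one active full plus 29 incrementals, which is an assumption) and the measured sizes above:

```python
# Projection of the measured job numbers over a simple backup chain
# (1 active full + N-1 incrementals). Retention of 30 points is assumed.

def chain_size_gb(full_gb: float, incr_gb: float, restore_points: int = 30) -> float:
    """Total chain size for one full plus (restore_points - 1) incrementals."""
    return full_gb + (restore_points - 1) * incr_gb

size_1mb = chain_size_gb(full_gb=320, incr_gb=16)   # 320 + 29 * 16 = 784 GB
size_4mb = chain_size_gb(full_gb=320, incr_gb=42)   # 320 + 29 * 42 = 1538 GB

print(f"1 MB blocks: {size_1mb:.0f} GB")
print(f"4 MB blocks: {size_4mb:.0f} GB")
print(f"overhead   : {(size_4mb / size_1mb - 1) * 100:.0f} %")   # roughly +96 %
```

So with these numbers the 4MB chain ends up roughly twice the size of the 1MB chain, for exactly the same source data.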

 

Conclusion : 

Therefore, in my opinion, it is not workable and not recommended to use a 4MB block size in combination with Veeam Cloud Connect using object storage as a capacity tier.

And IMHO the same goes for other cases where object storage is used as a secondary repository.

This is because : 

  • it consumes much more storage at the customer site (primary on-premises repository)
  • much more data needs to be transferred over the WAN (in Belgium the upload bandwidth is often a bottleneck; see the sketch below)
  • it consumes much more storage at the SP on the performance tier
  • because much more storage is used at the SP, the costs for the customer are higher, since more cloud storage is consumed
  • more data needs to be transferred to the capacity tier

All those disadvantages, just to get more performant object storage and a lower number of objects...
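To illustrate the WAN point: a minimal sketch, assuming a 100 Mbit/s upload link (an assumption; the incremental sizes are the measured values from the test above, and protocol overhead and other traffic are ignored):

```python
# How long the daily incremental takes to copy over the WAN at a given uplink speed.

def transfer_hours(size_gb: float, uplink_mbit: float = 100) -> float:
    """Hours needed to push size_gb over an uplink of uplink_mbit Mbit/s."""
    size_mbit = size_gb * 1024 * 8          # GB -> Mbit
    return size_mbit / uplink_mbit / 3600   # seconds -> hours

print(f"1 MB blocks: {transfer_hours(16):.1f} h per incremental")   # ~0.4 h
print(f"4 MB blocks: {transfer_hours(42):.1f} h per incremental")   # ~1.0 h
```

For one job that is manageable, but multiplied over hundreds of tenants sharing limited upload bandwidth, the larger incrementals hurt.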

 

Ideal solution - feature request : 

Therefore it would be ideal if the block size could be changed on the fly…

So you could set a different block size on a copy job or offload job (SOBR) than the one used by the primary job.

Right now, the last chain in the line (often object storage) depends entirely on the block size of the primary job.

Also, as an SP, you can’t control the block size of the backups coming into the Veeam Cloud Connect infrastructure.

So I think there is some room for improvement.

Especially because all the big object storage vendors really recommend setting the block size to 4MB or higher.

Looking forward to some improvement there.

 

Feedback :

What are your experiences with those topics?

Feel free to reply.

 

regards

Nico


14 comments

Userlevel 7
Badge +20

We have had to move to larger blocks for Object as smaller ones don't work as well. Great post Nico. 👍

Userlevel 7
Badge +11


Thx for your feedback @Chris.Childerhose. Also in combination with VCC?

Userlevel 7
Badge +20


Yeah any Veeam to Object we find.

Userlevel 7
Badge +7

Hello @Nico Losschaert, thanks for all this information.

I’m currently working on a new VCC infrastructure, so it’s good to know.

Do you notice a big difference in performance?

  • Write throughput
  • Restore Time
  • Deletion Time
Userlevel 4
Badge +1

It heavily depends on which object storage you are using; it’s not a common rule for everyone. Indeed, some suffer under the IO load we throw at them, and increasing the block size is an easy solution. But at the expense of a highly increased incremental size. @HannesK did some tests and 2x is the value we also found.

It makes total sense that a full backup has the same size: if you think about it, the entire VM always has the same size, regardless of whether you cut it into 1MB pieces or 4MB pieces. But when it comes to incrementals, a single changed byte can mark an entire block as changed, so a larger block has more chances of containing changed bytes.
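A tiny Monte Carlo sketch of that effect, assuming uniformly random changed bytes (an unrealistic worst case, purely for illustration; real workloads have locality):

```python
# Count how much data gets marked as "changed" for the same set of changed
# bytes, at 1 MB vs 4 MB block size. Uniformly random change offsets are assumed.
import random

DISK_GB   = 100
CHANGES   = 20_000                          # number of randomly changed byte offsets
DISK_SIZE = DISK_GB * 1024**3

random.seed(42)
offsets = [random.randrange(DISK_SIZE) for _ in range(CHANGES)]

for block_mb in (1, 4):
    block = block_mb * 1024**2
    dirty = {off // block for off in offsets}   # unique blocks touched
    print(f"{block_mb} MB blocks: {len(dirty) * block_mb / 1024:.1f} GB marked as changed")
```

With these uniform-random assumptions the 4MB run marks roughly three times as much data as changed; real workloads with more locality usually land lower, which fits the ~2x observation above.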

Some others worked by introducing throttling, either in the storage or by placing reverse proxies in front of the storage (like haproxy or others). The idea obviously is to slow down the backup process so that the storage is able to ingest the data. This choice will affect the SLA, as you’re now offloading data at a slower pace.


Some providers instead went and tested and selected vendors that have no problem at all ingesting our data. But as you can imagine, if we need to go into these details I’d take the conversation offline.

BTW: if you go for Direct2Object, the IO load is even higher, as writes can now be random, just like on the block storage they are replacing. Capacity tier is somewhat “safe” as many writes are sequential and so a little bit less stressful than D2O.

Userlevel 7
Badge +11


Thx a lot for your feedback @LDelloca ! Much appreciated, especially coming from you 🤗.

The 4MB block size definitely seems to be the best choice towards the object storage, because 1MB blocks deliver objects of +/- 512KB and 4MB blocks deliver objects of +/- 2MB. Because the object storage vendor handles objects smaller than 1MB differently (mirrored) than objects larger than 1MB (erasure coding), the way to go is having objects larger than 1MB.

On top of that, the number of objects will differ significantly, so performance and reliability will be better with a 4MB block size.

Unfortunately, we have to set all of this at the very beginning 😥. So more storage consumed at the customer, more data over the WAN, more storage consumed on the performance tier… that’s unfortunate.

 

But I believe in this setup.


Thx again for your feedback.

Userlevel 7
Badge +11


What do you mean, @Stabz, by a big difference in performance?

Having 4MB blocks vs 1MB blocks?

I will test the difference. It probably also depends on the whole setup and the choice of vendors, especially the object storage vendor.

Userlevel 7
Badge +6

Keep in mind what other data you may have on your Object Storage. E.g.: if you have other (non-Veeam) data that uses larger blocks in general, your object storage may be able to handle the smaller blocks that Veeam creates, deep in their nested folders, more easily than if it’s all Veeam data using 1 MB blocks. 

 

As well, if your Object Storage is all SSD or NVMe, it might not matter at all. That said, I can’t imagine anyone deploying Object Storage as a backup landing zone using anything other than HDDs, just due to pure cost. Scaling up into PiB of data using SSDs is no small expense.

 

Scale of your Object Storage deployment can also have an impact on your overall performance. A million objects spanned over 8 HDDs is going to hurt. A million objects spanned over 800 HDDs is not even a concern. However, now we’re back to the first point I made - if all the data on your Object Storage is Veeam data, scale isn’t necessarily going to help you here, because you’re no longer talking about a million objects. You’re talking billions if you use 1 MB blocks. As an example, a 53 TB bucket of Veeam data using 4 MB blocks contains 35.8 million objects. A 71.2 TB bucket of Veeam data using 1 MB blocks contains 149 million objects. These are real-world numbers from our Object Storage. Extrapolate those numbers out for an Object Storage cluster containing PiB of data. That’s a *lot* of objects. Want to know why @Chris.Childerhose uses 4 MB blocks? Now you know. Whether you can control both sides (e.g. do you manage your customer’s on-premises Veeam settings, so you can set these options?) is a great question.
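As a quick sanity check of those figures, the implied average object size can be back-calculated (pure arithmetic on the numbers above; TB is treated as decimal here):

```python
# Average object size implied by bucket size / object count.
buckets = {
    "4 MB blocks": (53e12, 35.8e6),    # 53 TB, 35.8 million objects
    "1 MB blocks": (71.2e12, 149e6),   # 71.2 TB, 149 million objects
}
for label, (total_bytes, objects) in buckets.items():
    print(f"{label}: ~{total_bytes / objects / 1e6:.2f} MB average object size")
```

Both averages (~1.5 MB and ~0.5 MB) sit well below the nominal block size, because each block is compressed before it is written as an object.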

 

I suspect the hyper-scalers handle this better than on-premises Object Storage simply due to their scale. They can have thousands of HDDs in their Object Storage, consuming all sorts of data (not just Veeam), allowing them to spread the small-object load across large swaths of HDDs. Some on-premises solutions may have a significantly sized cache tier just to help mitigate the effects of small objects.

 

Now, back to performance. Sure, reads/writes might be slower with 1 MB objects, but is that even *really* a problem, provided (obviously) that your object storage doesn’t fall over completely handling the ingress/egress data and you can ingest the data fast enough to meet the customer backup job schedule?

If a customer is going to Object Storage hosted by some other vendor somewhere off in the internet, it’s hopefully not their primary backup (3-2-1 anyone? Anyone? Bueller?). Generally offsite backups are not expected to have short RPOs, because there are too many variables: internet speed, backup sizes, offsite vendor of choice, etc. If your primary backups are toast, you’re generally pretty happy just to have an offsite copy *somewhere*. That said, there are always going to be those customers that want near-zero RPOs and RTOs (that is, until they see the costs of doing so, but that’s another conversation).

All that said, that’s not specifically my point. Of course you need your Object Storage to be performant *enough* to handle ingesting data from many customers, and performant enough to handle restores. The other concern (and arguably the bigger concern) you’ll have is: What do you do when you need to rebuild that data? You lose a HDD, now what? How fast can that replacement HDD be rebuilt from your other drives in your erasure coded set? You did plan out a sizeable erasure coded set, right? Small objects can kill you here too - each object is a file on that underlying file system. Rebuild times can be quite significant. Plan accordingly.


Now, I’m not saying be scared of Object Storage with your Veeam deployments. Only just be aware it has its own set of challenges. Some Object Storage vendors may handle things better than others, but the challenges are usually pretty ubiquitous across the vendors. Some just handle those challenges differently than others. 

Userlevel 4
Badge +1

Great comment Tyler, thanks.
I can add two notes here:
 

  • hyperscalers do it better: I’m not so sure. They seem to be able to sustain bigger loads indeed, because the underlying system has many more disks and nodes, but also because they apply tons of throttling rules on the network. We played a bit at a provider with an on-prem S3 and an haproxy cluster in front of it, and we noticed this clearly. When you start to apply throttling, the storage has an easier life. But as I said before, at the expense of backup times;
  • HDD only: that’s correct if it’s a capacity tier for SOBR, but for primary backups I’ve started to see many high-end solutions (name the usual suspects) with hybrid, all-flash or even NVMe. Yes, as a backup target. The price justification heavily depends on which business you are running, so what sounds unjustifiable for your service may be totally fine for another. For the topic of this thread, VCC, I agree it may be overkill.
Userlevel 7
Badge +11


Thx a lot for this feedback @TylerJurgens ! What a ton of information, love it 🤣. We will be using object storage as a capacity tier, and of course with regular HDDs. There will be almost 100 HDD spindles, so not that bad. We will see how the performance turns out once data is being pushed to the capacity tier. It’s difficult to know in advance, also because it’s the first time we will be using on-premises object storage. Performant servers with block storage have already proven themselves; they are very good. Therefore I believe in this setup. Another advantage of our object storage solution is that there are no limits on the size of a single bucket, because it doesn’t use an underlying file system. The solution can easily be scaled out if needed. But as with a lot of things: there is the theory and there is the practice 🤗

Userlevel 7
Badge +11


Thx again @LDelloca for your second feedback on this. Really nice to get feedback from one of the most famous Veeam architects regarding Veeam Cloud Connect 🤗. As mentioned in my former reply, we will be using HDDs without an underlying file system. That should normally be fast enough for a capacity tier. The biggest advantages of an on-premises solution compared to a hyperscaler solution: the control over it, the proximity and the fixed investment, in my opinion.

Userlevel 7
Badge +6


Also, out of curiosity, why ReFS instead of Linux with XFS? I found XFS *far* more stable than ReFS and easier to recover from when challenges arise. Additionally, Microsoft has strict hardware requirements when using any RAID controller with ReFS - deviating from them can cause data loss/corruption (ask me how I know next time we’re together in Prague). 

Userlevel 7
Badge +11


Hey @TylerJurgens, because we don’t have Linux in our portfolio and our engineers have too little Linux knowledge. It’s a very bad idea to use XFS if the knowledge isn’t there when things go wrong...

Userlevel 7
Badge +22

The block size, as Nico points out, has a big effect on backup size. 


Luca is spot on here. Not all vendors recommend that 😀. I mean you should not have to increase your backup size 4 to 8 times in order to get things to work well. 
