
For this post I had planned to share some time and bandwidth comparisons for backups in my lab. But it is somewhat difficult to get representative data in a fully virtualized demo environment.

So I was very happy that @Gostev covered this topic in his latest Words from Gostev (Veeam R&D Forums Digest) from February 8, 2021. If you are not subscribed to this newsletter yet: do so! https://forums.veeam.com/

I want to summarize important facts here:

  • Backup performance per appliance (all-in-one box) doubles in v11.
  • The basics were revised and optimized:
    • How backup files are written to disk.
    • A shared memory transport engine is used when the source and target data movers run on the same box.
    • Full NUMA awareness was implemented.
    • The data mover placement logic was enhanced to ensure the two movers never end up on different CPUs.
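The shared memory transport idea can be illustrated with a minimal Python sketch (my own illustration, not Veeam's code; names and sizes are made up): when both data movers run on the same box, the target can attach to the very pages the source wrote, instead of pushing every block through a local socket.

```python
from multiprocessing import shared_memory

# "Source data mover": write a backup block once into a shared segment.
shm = shared_memory.SharedMemory(create=True, size=1024)
payload = b"backup-block" * 10
shm.buf[:len(payload)] = payload

# "Target data mover": attach to the same segment by name and read the
# identical pages - no extra copy through a loopback socket is needed.
peer = shared_memory.SharedMemory(name=shm.name)
received = bytes(peer.buf[:len(payload)])

peer.close()
shm.close()
shm.unlink()
```

In a real deployment the two movers are separate processes; the sketch only shows the zero-copy handoff that makes the all-in-one case fast.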

All this, and I assume much more, leads to incredible performance. Veeam tested with an HPE Apollo 4510 Gen10 server – which really is a perfect backup target – as an all-in-one appliance. Veeam peaked at 11.4 GB/s (!) backup speed, with the source still being the bottleneck (!).
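Some back-of-the-envelope arithmetic of my own (decimal units assumed): at that rate, the 5.5 TB of used VM data listed in the configuration below would be read in full in roughly eight minutes.

```python
# My own sanity check, not Veeam's numbers: how long a full read of the
# source data set takes at the published peak rate (decimal units).
used_bytes = 5.5e12              # 5.5 TB used space across the 45 VMs
peak_rate = 11.4e9               # 11.4 GB/s peak backup speed
seconds = used_bytes / peak_rate
minutes = seconds / 60
print(round(seconds), round(minutes, 1))  # ~482 s, ~8.0 min
```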

Detailed configuration:

  • HPE Apollo 4510 Gen10
  • 2x Intel Xeon Gold 6252 CPU @ 2.1GHz (24 cores each)
  • 12x 16GB DIMM (192GB RAM total)
  • 2x 32Gb/s FC; 2x 40GbE LAN
  • 58x 16TB SAS 12G HDD
  • 2x HPE Smart Array P408i-p Gen10
  • 2x RAID-60 with 128KB strip size on 2x (12+2) + 2 hot spares (575TB usable)
  • Windows Server 2019 with ReFS 64KB cluster size
  • 45 VMs with a total of 5.5TB used space + backup encryption enabled + per-VM backup chains enabled
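For a sanity check of the disk layout (my own arithmetic, based on the list above): two RAID-60 arrays, each built from two (12+2) RAID-6 spans, leave 48 spindles carrying data, so the 11.4 GB/s peak works out to roughly 240 MB/s per HDD – close to the sequential limit of a large SAS drive, which fits the "source was the bottleneck" claim.

```python
# My own estimate from the layout above: parity drives excluded.
arrays = 2                     # RAID-60 arrays
spans_per_array = 2            # 2x (12+2) RAID-6 spans per array
data_drives_per_span = 12      # 14 drives minus 2 parity drives
data_disks = arrays * spans_per_array * data_drives_per_span
per_disk = 11.4e9 / data_disks # bytes/s each HDD must sustain at peak
print(data_disks, round(per_disk / 1e6, 1))  # 48 drives, ~237.5 MB/s each
```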

Personal notes

  • I think Veeam uses four RAID-60 arrays overall, two arrays per Smart Array controller. This certainly increases disk performance, but at the cost of capacity – keep this in mind.
  • It would be interesting to know how data enters the appliance. To reach 11.4 GB/s, I guess FC and LAN are used in parallel.
  • It would also be interesting to know what the source storage was. :thinking:
  • I am sure these performance improvements will also be observable in distributed environments.
  • Well-designed servers from other vendors are of course able to reach this performance level too!
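To check my guess about parallel ingest, simple line-rate arithmetic (my own numbers; FC framing and TCP/IP overhead are ignored, so these are upper bounds): neither fabric alone reaches 11.4 GB/s, but both together comfortably do.

```python
# Line-rate ingest bandwidth of the listed front-end links (Gb/s -> GB/s).
fc_gbs = 2 * 32 / 8    # 2x 32 Gb/s Fibre Channel -> 8.0 GB/s
lan_gbs = 2 * 40 / 8   # 2x 40GbE                 -> 10.0 GB/s
total = fc_gbs + lan_gbs
print(fc_gbs, lan_gbs, total)  # 8.0 10.0 18.0
```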

 

Awesome share @vNote42, my speculation about the HPE Apollo as a backup repo was not senseless :sunglasses:


I’m really interested to see the same comparison with XFS :D

 

Sorry for the double post



Agree, that would be very interesting! But with a v11 Linux repo/proxy all-in-one box, you could not use the new hardened repository feature. So I think we will not see many of these Linux Veeam appliances in the near future.


Hmm, it will depend on the customer's infrastructure. An all-in-one box seems very interesting for infrastructures with a large SAN.
From my limited experience, some people are moving to HCI hardware (vSAN Ready Nodes, ...) to reduce cost, so a physical proxy seems useless there?
An XFS repo with immutability is awesome, not to forget reflink, Fast Clone, GFS, ...
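A minimal sketch of why reflink matters for Fast Clone (my own illustration, not Veeam code): on Linux, `os.copy_file_range()` lets the kernel clone extents on a reflink-enabled XFS volume instead of rewriting the data, which is what makes synthetic fulls and GFS points so cheap; on systems without it, the sketch falls back to an ordinary byte copy.

```python
import os
import shutil
import tempfile

# Create a sample "restore point" file to clone.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"restore point block\n" * 1024)
src.close()
dst_path = src.name + ".clone"

with open(src.name, "rb") as fin, open(dst_path, "wb") as fout:
    if hasattr(os, "copy_file_range"):   # Linux: may reflink on XFS
        remaining = os.fstat(fin.fileno()).st_size
        while remaining > 0:
            n = os.copy_file_range(fin.fileno(), fout.fileno(), remaining)
            if n == 0:
                break
            remaining -= n
    else:                                # portable fallback: plain copy
        shutil.copyfileobj(fin, fout)

with open(src.name, "rb") as a, open(dst_path, "rb") as b:
    identical = a.read() == b.read()
os.unlink(src.name)
os.unlink(dst_path)
print(identical)
```

Whether the kernel actually shares extents depends on the filesystem being created with reflink support (e.g. `mkfs.xfs -m reflink=1`); on other filesystems the same call simply copies the bytes.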

 

I read that too :grin:, unfortunately I don’t have an Apollo to play with at the moment.

https://forums.veeam.com/veeam-backup-replication-f2/veeam-v11-numa-awareness-t71923.html


Great recap here @vNote42 – and you are right, the forum digest is the best way to get some of the best information from Veeam first.



Thanks for the link! Very nice, indeed!

So we even know the storage systems: HPE 3PAR/Primera and HPE Nimble!


 

  • It would also be interesting to know what the source storage was. :thinking:

 

I was also quite surprised by the numbers which Anton mentioned in the last community digest and wondered what SAN/source storage was used and with which configuration. I hope that we’ll see more of that infrastructure in a white paper or something similar.

 

EDIT: Looks like I’m too late and didn’t refresh the page :sweat_smile:


Gostev's Veeam R&D Forums Digest newsletter is an excellent source of new information on Veeam.
I've been receiving great news from it for years, great!


I have been following his newsletter since the days he started it. It's in my top 5 blogs/podcasts for the daily digest.

Thanks for summarizing and sharing it here.




The mail from Anton is usually the first one I read on Monday mornings :)

I’ve often learned about or discovered topics there which I would otherwise have missed, so thanks @Gostev


[Update]

Read this post to get very detailed information about the setup: what and why!

https://forums.veeam.com/veeam-backup-replication-f2/veeam-v11-numa-awareness-t71923.html#p401059


https://psnow.ext.hpe.com/doc/a50000150enw?jumpid=in_lit-psnow-red

HPE Reference Architecture for Veeam Availability Suite with HPE Apollo backup target

It’s for ReFS, but from my point of view it could be adapted for XFS.


@vNote42: Thanks!


11.4 GB/s read speed??? Whoa! I thought my 3 GB/s was fast (using BfSS and Nimble volumes) :joy:



fast is never fast enough :sunglasses:  



TRUTH! 😂


Registered now!

