Experience with HPE Apollo or other high-density servers as a backup repository?




Userlevel 6
Badge +1

High Performance. But the error cleared the same second it started; I think this was just a hiccup.

Userlevel 7
Badge +13

I wanted to post some Veeam benchmarks too, but this doesn’t make much sense at the moment because the jobs are limited by the network. We have the 40/50 GbE 547FLR network adapter with a QSFP → SFP adapter, so it is currently limited to 2x 10 GbE bonded Linux interfaces. I see ~1.3 GB/s, and Veeam shows network or source as the bottleneck. Our network equipment will be replaced in the next months, then I will switch to 40 GbE. As we are testing with v10, I also cannot use the Apollos as proxies to back up directly from storage snapshots. Our existing proxies are sending the data to the Apollo over the network.
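A quick way to sanity-check the raw bond throughput from a proxy to the Apollo is an iperf3 run with a few parallel streams (iperf3 needs to be installed on both sides; the IP is a placeholder):

iperf3 -s                            # on the Apollo (server side)
iperf3 -c <apollo-ip> -P 4 -t 30     # on a proxy: 4 parallel streams for 30 seconds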

One piece of advice: make a large /tmp filesystem if you use a dedicated partition or LVM volume. Veeam writes a lot to /tmp, and my first jobs failed because it was only 4 GB.
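For example, if /tmp sits on LVM, something like this grows it in place (the VG/LV names are just placeholders):

lvextend -L 50G -r /dev/vg_system/lv_tmp   # grow the LV and resize the filesystem in one step
df -h /tmp                                 # verify the new size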

 

Thanks for your advice!

Did you try different NIC bonding policies? I am not a network guy, but with the correct policy it should be possible to use both uplinks simultaneously when multiple backups use multiple proxy servers to write to your Apollo server.

 

Userlevel 6
Badge +1

I configured balance-alb, which should work for incoming traffic too, but I did not try other modes. I see that both interfaces are used, just not at 20 Gbit/s. But bonding has never given me full line speed, even with multiple senders.
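Roughly how I check the active mode and whether traffic is actually spread across both slaves (bond0 and the interface names are placeholders):

cat /proc/net/bonding/bond0    # bonding mode and slave status
ip -s link show ens1f0         # per-interface RX/TX counters
ip -s link show ens1f1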

Userlevel 7
Badge +13

Yes, I think it’s session-based load balancing, so with just one stream you will only be able to use one link.
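If the switches support it, an LACP bond with a layer3+4 hash spreads multiple TCP streams across the links; a rough nmcli sketch (interface names are placeholders, and the switch ports need a matching LACP configuration):

nmcli con add type bond ifname bond0 bond.options "mode=802.3ad,xmit_hash_policy=layer3+4,miimon=100"
nmcli con add type ethernet ifname ens1f0 master bond0 slave-type bond
nmcli con add type ethernet ifname ens1f1 master bond0 slave-type bond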

Userlevel 7
Badge +8

Out of curiosity, what kind of object storage are you using? Can you give an example of copy job performance?

I will deploy on RHEL 8 too; we’re using Kickstart or Satellite provisioning. Do you deploy any hardening on your repo? RHEL CIS?

Userlevel 6
Badge +1

For capacity tier offloading we use AWS S3 buckets; we want to look into Wasabi in the next couple of weeks, now that they have object lock too.

 

The servers were deployed by the Linux team; they use Satellite. No special hardening yet.

Userlevel 7
Badge +8

Hey Ralf, any news about your test?

Userlevel 7
Badge +8

Hey @Ralf, when you say /var is growing rapidly, what is it? Logs? Cache? How many MB/GB per day/job?

I’m wondering how you mounted the XFS partition. Are you using LVM?

Userlevel 7
Badge +13

Hey @Ralf, when you say /var is growing rapidly, what is it? Logs? Cache? How many MB/GB per day/job?

I’m wondering how you mounted the XFS partition. Are you using LVM?

I would also be interested in whether you are using LVM. IMHO it is not necessary for repositories.

Userlevel 6
Badge +1

Yes, we use LVM by default, but not for the Veeam extents. /var has a size of 25 GB now; 17 GB are used, 12 GB of which are Veeam logs.
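Roughly what I check to see the growth per day (the Veeam log path may differ slightly depending on version):

df -h /var
du -sh /var/log/VeeamBackup*   # size of the Veeam data mover logs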

 

xfs mount options (not much tuning):

… type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=6144,noquota)
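For context: sunit/swidth in that mount line are in 512-byte sectors, so sunit=512 is a 256 KiB stripe unit and swidth=6144 is a 3 MiB stripe width, which would correspond to 12 data disks. xfs_info shows the same geometry in filesystem blocks (the mount point is a placeholder):

xfs_info /backup/repo01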

Userlevel 6
Badge +1

Just a note: we configured the 4510 servers with the 547FLR-QSFP 40G network adapter. This was not the best idea, as the maximum supported DAC cable length is 5 m and the only alternatives are MPO transceivers or AOC cables. BiDi transceivers like the “HPE X140 40G QSFP+ LC BiDi MM” are not supported for this adapter; there is an HPE advisory that certain HPE 40/100G network adapters do not work with BiDi due to a power problem. As we do not use MPO or AOC in our datacenter, we probably have to replace the adapters now.
