
Seeking field experience with a Veeam repository on HPE Alletra 4120 (Linux hardened repo): do SSD cache drives deliver measurable gains (ingest/synthetic/merge) compared to simply adding spindles?

Hi ​@imadam , 

 

I tested this a while back on some light workloads, and there was some improvement for sure. I wouldn’t say it’s essential, since the synthetic operations (Synthetic Fulls / Merge operations) use Fast Clone and thus already get pretty good speed benefits.

But in larger environments you should see smoother operation overall.
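(If you ever want to confirm Fast Clone is actually available on the volume, reflink has to be enabled on the XFS filesystem; a quick check below, with a hypothetical mount point:)

```sh
# "reflink=1" in the xfs_info output means XFS block cloning (Fast Clone) is available on this filesystem.
xfs_info /mnt/veeam-repo | grep -o 'reflink=[01]'
```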

Basically, if you’re in a position to configure it, it’s probably best, as it will reduce disk contention: https://docs.oracle.com/en/operating-systems/oracle-linux/10/xfs/xfs-CreatinganXFSFileSystem.html#notable_xfs_feature_options

However, keep in mind this will primarily help with sequential I/O; random I/O (e.g., Instant Recovery, file-level / application-level restores) will likely not benefit much from an external metadata disk.
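For what it’s worth, a minimal sketch of how such a layout could be created is below. Note that XFS only lets you externalise the journal log (so only part of the metadata I/O moves to the SSD), and the device names and mount point are assumptions rather than a tested recommendation:

```sh
# Assumed layout: /dev/sdb = RAID volume on spindles, /dev/nvme0n1p1 = small SSD partition for the XFS journal.
# reflink=1 (and its prerequisite crc=1) keeps Fast Clone working; both are defaults on recent xfsprogs.
mkfs.xfs -b size=4096 -m reflink=1,crc=1 -l logdev=/dev/nvme0n1p1 /dev/sdb

# The external log must also be specified at mount time (and in /etc/fstab):
mount -o logdev=/dev/nvme0n1p1 /dev/sdb /mnt/veeam-repo
```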

 


Hi ​@imadam -

I use Nimble HF40s (basically, an earlier model to the Alletra) for Hardened Repo storage, connected to HPE DL360 G9s running Ubuntu 24.04 on XFS with Fast Clone.

Yeah, you are going to gain some improvement using the cache; not sure how much, as I never tested it...just implemented.

Interestingly, it looks like HPE has a Veeam Hardened Repo implementation guide; it’s more of a recommended array configuration guide for using the array as a Veeam Hardened Repo:

https://www.veeam.com/veeam_hardened_repository_installation_for_hpe_alletra_storage_server_4120_vrd.pdf, in which they seem to recommend a 50% write cache ratio (see pg. 12).
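If you’d rather apply that ratio from the CLI instead of the Smart Storage Administrator GUI, something along these lines should work with ssacli (the slot number is an assumption; check yours first):

```sh
# List controllers to find the right slot number.
ssacli ctrl all show status

# Split the controller cache 50/50 between reads and writes, per the HPE guide (pg. 12).
ssacli ctrl slot=0 modify cacheratio=50/50

# Verify the change.
ssacli ctrl slot=0 show detail | grep -i "cache ratio"
```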


Hi ​@imadam - I’m creating a series of posts regarding DAS repositories, which I’m updating weekly; have a look at them.

 

But you should consider what you will use the repo for: for a cache repository (when backing up file shares / NAS), replica metadata, or a metadata extent, you should use SSD devices.

 

You also need to check the network connection; it might become the bottleneck once you have only SSD devices.
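One quick way to sanity-check that before blaming the disks is a throughput test between the proxy and the repository, e.g., with iperf3 (the address and stream count below are just placeholders):

```sh
# On the repository server:
iperf3 -s

# From the backup proxy (hypothetical repository IP), run a multi-stream test to approximate backup traffic:
iperf3 -c 192.0.2.10 -P 4 -t 30
```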

As you are using Linux, I would also recommend checking for the best LVM configuration (a rough example is sketched below).
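As a rough illustration only, here is what a striped LVM layout for a DAS repo could look like; the disk names, stripe count, and stripe size are assumptions to adapt, not recommendations:

```sh
# Assumed: four internal data disks exposed individually by the controller.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg_repo /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Stripe the logical volume across all four PVs; 256 KiB stripe size is only an example value.
lvcreate -n lv_repo -l 100%FREE -i 4 -I 256 vg_repo

# Format for Fast Clone as usual.
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/vg_repo/lv_repo
```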

 

If you have some time, check those articles; they will give you more background on DAS.

 

https://community.veeam.com/blogs-and-podcasts-57/block-repository-das-using-internal-disks-part-i-11633

 

https://community.veeam.com/blogs-and-podcasts-57/block-repository-das-using-internal-disks-part-ii-11751

Hope that helps



Quoting that HPE guide (pg. 12):

“For the HPE SR932i-p smart array controller used in HPE Alletra Storage Server 4140/4120 that comes with an 8 GB wide cache, HPE recommends assigning 50% of controller cache to writes, which is the maximum percentage allowed by the controller. In addition, HPE recommends assigning 50% to writes using the Cache Manager function of the HPE Smart Storage Administrator application, as presented in Figure 13. In testing for this solution, performance was measured with and without the writeback cache enabled. In the configuration tested, Veeam backups were found to be faster when writeback cache was used at the recommended ratio.”

This is controller cache, though. I was wondering about a separate SSD for this purpose. Looks like it is not worth it; Veeam doesn’t recommend it anyway.

Thanks. 


Ah, that’s correct. That was the only documented information I could find. Would a separate SSD help? Probably. Would the gain be big enough to be worth the slight trouble of implementing it? Probably not. Your call, really.

Best.