
Hi, I'm looking at the optimal Veeam config for a Veeam hardened repository backed by a storage platform that provides data reduction capabilities.

From a repo perspective, “Decompress backup file data blocks before storing” should be set, as well as “Use per-machine backup files”.

Aside from that, in the jobs, should I leave inline deduplication off and compression at dedup-friendly?

My thinking was that the job should avoid the extra overhead of deduplicating and compressing the backup files, since the repository will decompress them on write anyway.
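For what it’s worth, the underlying effect is easy to demonstrate outside of Veeam. Here is a minimal Python sketch - the 4 KiB chunk size, the synthetic data, and zlib standing in for job-level compression are all assumptions, not Veeam’s actual block format. Two backup buffers that differ in a single chunk dedup almost perfectly when raw, but share essentially nothing once each is compressed before the array sees them:

```python
import hashlib
import zlib

CHUNK = 4096  # assumed array dedup granularity, not a Veeam setting

def chunk_hashes(buf):
    """Hash fixed-size chunks the way a dedup array fingerprints blocks."""
    return {hashlib.sha256(buf[i:i + CHUNK]).digest()
            for i in range(0, len(buf), CHUNK)}

# 256 distinct but compressible 4 KiB chunks standing in for guest data.
chunks = [hashlib.sha256(str(i).encode()).digest() * (CHUNK // 32)
          for i in range(256)]
backup_a = b"".join(chunks)
chunks[10] = b"\x00" * CHUNK          # one changed block between backups
backup_b = b"".join(chunks)

shared = len(chunk_hashes(backup_a) & chunk_hashes(backup_b))
print(f"uncompressed: {shared}/256 chunks dedup")          # 255/256

# Compress each buffer the way a backup job would before writing:
# a single changed block ripples through the rest of the stream.
ca, cb = zlib.compress(backup_a), zlib.compress(backup_b)
shared_c = len(chunk_hashes(ca) & chunk_hashes(cb))
print(f"compressed:   {shared_c}/{-(-len(ca) // CHUNK)} chunks dedup")  # ~0
```

Which is exactly why the “decompress before storing” repo option exists: it hands the array blocks it can actually reduce.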

Thanks in advance for the comments.

If you have a deduplication appliance, then set the job to dedup-friendly for sure; Data Domain is one example. If your repo is such a device, set it; otherwise leave the default compression, plus the other settings you mentioned.


It won’t be a dedup appliance; it will be a common storage array with data reduction capabilities, likely 3:1 to 5:1.


Hi Mike,

 

For me, I think you have to test this scenario. For common dedup appliances there are best practices; I don’t think those exist for normal storage systems.

Take 50 VMs with dedup off and the same with dedup-friendly for, let’s say, 14 days, so that you have fulls and incrementals. Make sure to use XFS with reflinks.
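On the XFS/reflink point: synthetic fulls on such a repo are created by block cloning rather than copying, which is why it matters for this kind of test (the usual format being something like mkfs.xfs -b size=4096 -m reflink=1,crc=1). A rough illustration of the underlying mechanism - the file names are hypothetical, and this is the Linux FICLONE ioctl that XFS reflink exposes, not Veeam’s own code:

```python
import fcntl

# FICLONE = _IOW(0x94, 9, int): share all extents of src with dst.
FICLONE = 0x40049409

# Hypothetical file names; on an XFS volume with reflink=1 this "copy"
# completes near-instantly and consumes no extra space until blocks diverge.
with open("full.vbk", "rb") as src, open("synthetic-full.vbk", "wb") as dst:
    fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
```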


Please also think about encryption of your backup files, which is strongly recommended. With encrypted backups you can’t get good efficiency from any storage system’s data reduction - or you might use the storage system’s native encryption instead.
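To illustrate why: the same plaintext block encrypted twice yields unrelated ciphertexts, so the array finds nothing to dedup, and the output won’t compress either. A quick sketch with the Python cryptography package - AES-CTR with a random nonce is just a stand-in here, not Veeam’s actual encryption scheme:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
block = b"identical guest data " * 195  # same block in two backup files

def encrypt(buf: bytes) -> bytes:
    """Encrypt with a fresh random nonce, as any sound scheme must."""
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(buf) + enc.finalize()

c1, c2 = encrypt(block), encrypt(block)
print(c1 == c2)           # False: nothing for the array to dedup
print(c1[16:48].hex())    # ciphertexts look like random noise,
print(c2[16:48].hex())    # so they won't compress either
```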


This is the reason why I like to use “simple but fast” block storage systems as repositories and let Veeam do the rest ;)

 

Matze



Thanks Matze. Yes, I’d need to test for the best combination with data-reduction storage rather than dedup storage. I assume my assumption holds: if the job dedups and compresses, the repository will decompress the blocks again on write anyway.

I’m inclined to test with dedup/compress on the job and also with it off.

Encryption is also a consideration, as you say - thanks.


What I can say for “legacy reasons” 😛 is that part of our repository targets a CIFS share. Job settings in Veeam are default. Even with this we see some savings, but dedup on HDD also adds load to the storage. So in the end you have to test. Which system do you plan to use?



Test it with the settings on and off, then, and benchmark to compare.


There is a very detailed KB on this topic here: 

Deduplication Appliance Best Practices

It even has specific recommendations for some of the more popular deduplication appliances, as well as general configuration advice and explanations.

