
I’m new to Veeam and I really like how easy it is to create scheduled backups and to retrieve individual files.

Does anyone know what kind of compression Veeam uses when it creates either incremental or full backups? Thanks, Steve

Please forgive me for asking a question that has been discussed here before.

I did search but didn't find anything; however, when I created this entry it showed me similar items, and there I found all of the info I was looking for. Thanks to everyone who has posted about Veeam compression. It has been a great deal of help.


@steveneashcraft 

about 2:1

Data Compression and Deduplication - User Guide for Microsoft Hyper-V (veeam.com)

It depends on what settings you have entered in the backup job; the default is Optimal.

 



Further to this, it will also depend on your backend storage type and the file system you are sending backups to, such as ReFS, XFS, etc. This plays a role because some vendors have best practices for storage settings, like the Storage Optimization setting, and may recommend disabling deduplication if you are using a dedupe appliance.

Be sure to check the Best Practices guide here - Welcome - Veeam Backup & Replication Best Practice Guide


Compression is part of the data efficiency techniques used to reduce backup size; it's a multi-pronged approach.

 

To improve storage efficiency you have the following:

 

  • Compression: this is customisable, and you can set it to Dedupe-friendly should you use a deduplication appliance or a Windows Server with deduplication enabled (I don't see much benefit in Windows Server dedupe, but mentioning it for completeness)
  • Deduplication: Veeam will reuse existing blocks within the chain to avoid backing up data a second time.
  • Deleted Blocks & swap file exclusions: Veeam will exclude these blocks from being processed by default to reduce backup size, as these blocks & files aren’t often required.
  • Block Size: Veeam offers optimisations for tweaking the block size it fetches as part of changed block tracking (CBT). Larger block sizes are faster to process because there are fewer of them to track, but if only 1 KB of a block has changed and you're fetching 4 MB blocks, you're backing up nearly 4 MB of unchanged data alongside it, so incremental files are larger. Conversely, smaller blocks mean more blocks to track and more API calls; when dealing with object storage with metered API calls, this can be undesirable.
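The block-size trade-off above can be sketched with some back-of-the-envelope arithmetic. This is a hypothetical model for illustration only (the function name and parameters are my own, not Veeam internals): each changed block is stored whole, so a small change inside a large block still costs the full block in the incremental.

```python
def incremental_cost(disk_size, changed_blocks, block_size, change_span=1024):
    """Rough model: a changed block is read/stored whole, so a 1 KB change
    inside a 4 MB block still costs ~4 MB of incremental data."""
    total_blocks = disk_size // block_size           # blocks to track (~ API calls)
    incremental_bytes = changed_blocks * block_size  # whole blocks are backed up
    wasted = changed_blocks * (block_size - change_span)  # unchanged data carried along
    return total_blocks, incremental_bytes, wasted

disk = 100 * 1024**3  # 100 GiB disk, 100 blocks each containing a 1 KB change
for block in (512 * 1024, 1024**2, 4 * 1024**2):
    blocks, inc, waste = incremental_cost(disk, changed_blocks=100, block_size=block)
    print(f"{block // 1024:>5} KB blocks: {blocks:>7} blocks to track, "
          f"incremental ~{inc / 1024**2:.0f} MiB ({waste / 1024**2:.0f} MiB unchanged)")
```

Running this shows the inverse relationship: quadrupling the block size cuts the tracking/API overhead to a quarter but quadruples the incremental size for the same number of changed blocks.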

Finally you have the block cloning/referencing between different backup chains via ReFS/XFS file system integrations to improve data efficiency further. Hopefully this is a good start!


Others have provided the compression options you have, and the User Guide screenshots/links discuss which compression option to use and when. If you're wanting to know which compression algorithm Veeam uses, it isn't readily available in the Guides; you have to do a little digging in the Forums. It was also discussed in the older Veeam VMCE course. The algorithms Veeam uses for compression are lz4 and zlib, as shown in this Forum compression algorithm change suggestion post. Hope that helps.
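If you want a feel for what a general-purpose compressor like zlib does, here's a quick sketch. zlib ships in the Python standard library (lz4 needs the third-party `lz4` package, so it's omitted here); this is not Veeam's implementation, and the sample data is made deliberately repetitive, so the ratio will be far better than the ~2:1 typically quoted for real VM data.

```python
import zlib

# Repetitive sample payload; real VM disk data compresses much less.
data = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 2000

compressed = zlib.compress(data, level=6)  # zlib's middle-of-the-road level
ratio = len(data) / len(compressed)
print(f"{len(data)} -> {len(compressed)} bytes, ratio {ratio:.1f}:1")
```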
 

Cheers!


@steveneashcraft  Good shout here by @coolsport00 calling out the compression engines: zlib and lz4. They are configurable; one thing I'll add is that the defaults are good for most use cases.

You can adjust them, but they are a balance of compute | time | storage efficiency. Many of the integrated destinations will pre-define the optimal settings.

