
So, today I’m going to show you a simple comparison of one VM per job versus two or more VMs per job.

 

In my scenario I have 2 VMs for my Active Directory environment, 40GB in size each.

----------------------------------------------------------------------------------------------------------------------------

So, in the first example I create one job per VM:

With this setup the backups consume exactly 40GB on my repository:

 

----------------------------------------------------------------------------------------------------------------------------

In the second example I create a single job for the same 2 VMs:

 

And with this setup the backups consume 37GB on my repository:

----------------------------------------------------------------------------------------------------------------------------

 

As we can see, Veeam B&R’s deduplication saves a good amount of space on our backups.

In this environment there were only 2 small VMs, but you can imagine the impact across an entire datacenter.
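If you want to play with the numbers yourself, here is a small Python sketch using the figures from above. The datacenter-sized footprint at the end is purely hypothetical, and the real ratio depends on how similar your VMs actually are:

```python
# Savings math from the example above: the 2 VMs backed up in separate
# jobs consumed 40GB, while a single combined job consumed 37GB.
separate_jobs_gb = 40
combined_job_gb = 37

savings_gb = separate_jobs_gb - combined_job_gb
savings_pct = savings_gb / separate_jobs_gb * 100
print(f"Saved {savings_gb}GB ({savings_pct:.1f}%) just by grouping 2 small VMs")

# Hypothetical projection: the same ratio applied to a larger fleet.
# This figure is made up purely for illustration.
fleet_backup_size_gb = 50_000  # e.g. a datacenter's nightly backup footprint
projected_savings_gb = fleet_backup_size_gb * savings_pct / 100
print(f"At the same ratio, {fleet_backup_size_gb}GB of backups would save ~{projected_savings_gb:.0f}GB")
```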

This is always an interesting discussion with clients. I know that with SOBRs and per-VM chains we do lose deduplication as well.


Yes, the more VMs - with the same OS - you have in a job, the more effect the deduplication will have.
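To picture why: deduplication works at the block level, so identical blocks from same-template disks only need to be stored once within a backup file. Here is a toy Python sketch of that idea - not Veeam’s actual algorithm, just fixed-size block hashing for illustration:

```python
import hashlib

BLOCK_SIZE = 4  # toy block size; real backup software uses much larger blocks


def unique_blocks(disks):
    """Count total blocks written vs distinct blocks that need to be stored."""
    seen = set()
    total = 0
    for disk in disks:
        for i in range(0, len(disk), BLOCK_SIZE):
            block = disk[i:i + BLOCK_SIZE]
            seen.add(hashlib.sha256(block).hexdigest())
            total += 1
    return total, len(seen)


# Two "VM disks" built from the same OS template: most blocks are identical.
vm1 = b"OS__OS__OS__app1data"
vm2 = b"OS__OS__OS__app2data"

total, unique = unique_blocks([vm1, vm2])
print(f"{total} blocks written, only {unique} need to be stored "
      f"({100 * (1 - unique / total):.0f}% saved)")
```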

And when you are using synthetic full backups on a ReFS or XFS repository, you will save even more space…
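On XFS that extra saving comes from reflinks (block cloning): the synthetic full references the blocks that already exist in the chain instead of copying them. If you want to see the mechanism outside of Veeam, here is a rough Python sketch, assuming a reflink-enabled XFS mount at the example path /mnt/xfs-repo and example file names:

```python
import subprocess

# Assumes /mnt/xfs-repo is an XFS filesystem created with reflink support
# (the default on recent distros). Path and file names are just examples.
repo = "/mnt/xfs-repo"

# Create a 1GiB file standing in for a full backup.
subprocess.run(["fallocate", "-l", "1G", f"{repo}/full.vbk"], check=True)

# A reflink copy shares the same data blocks instead of duplicating them,
# which is the same mechanism fast clone uses for synthetic fulls.
subprocess.run(["cp", "--reflink=always",
                f"{repo}/full.vbk", f"{repo}/synthetic-full.vbk"], check=True)

# 'df' shows the second "full" consumed almost no extra space on the
# filesystem, even though 'du' reports the logical size of both files.
subprocess.run(["df", "-h", repo], check=True)
```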

I think when you are using one VM in all of your jobs, or per-VM chains, then a deduplicating storage would be a good choice.


 

Yes, the only thing to watch is to make sure that the deduplicated storage has a landing zone so that synthetic operations are fast, otherwise you run into rehydration hell 🙂. Something Veeam-integrated like ExaGrid, etc.


 


Yes, you are right @Geoff Burke, I was not precise enough with this topic. :sunglasses:


 


No probs… you know what they say, "your memory is really helped when you pick up a stick in the fire and get burned." I picked up a dedupe stick once after ReFS 3 was introduced and fast clone turned into slow clown :)


Don’t even ask about the crazy setup, but there was mixing involved and it was not what I wanted :)


 

 

We recommend you enable data deduplication if your backup or replication jobs contain several VMs that have a lot of free space on their logical disks or VMs that have similar data blocks — for example, VMs that were created from the same template. However, note that data deduplication may decrease job performance.

 

