Question

Add/remove disks from replicated VM


Userlevel 2

I had to remove 2 virtual disks from a VM running in vSphere 6.7.
The operation went fine, and the replication job didn't complain about it either.

The problem is that the replica VM still has the old disks attached, while the source VM doesn't.
So the change to the source VM configuration was not reflected in the replica VM.

Question:
 - is it safe to remove the unwanted virtual disks from the replica VM using vCenter,
 - or do we also need to re-create the replication job?

7 comments

Userlevel 7
Badge +7

Hi @FMolinelli 

Stop the old replication job to keep the replicated VM.

Remove the disks from the original VM and create a new replication job.
After the new job has reached the desired number of restore points, delete the old replica.
Regards
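
For anyone scripting this, the steps above can be sketched with Veeam's PowerShell cmdlets. This is an untested outline rather than a verified procedure, the job name is a placeholder, and it assumes you run it on the Veeam Backup & Replication server:

```powershell
# Disable the old replication job: it stops running, but the replica VM is kept
Get-VBRJob -Name "OldReplicaJob" | Disable-VBRJob

# ...create the new replication job in the console and let it build up
# the desired number of restore points. Then delete the old replica
# from Home > Replicas in the console, and finally remove the old job:
Get-VBRJob -Name "OldReplicaJob" | Remove-VBRJob
```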

Userlevel 2

Hi @Link State 

so I have to create a new job after removing the disks from the original VM (I had already removed them).
Can't I even use the old replica as a seed/mapping for the new job, after having removed the unwanted disks from it?
Regards

Userlevel 7
Badge +20

Hi @FMolinelli, it’s a limitation. I’d suggest you add your voice to the request for a change here:

https://forums.veeam.com/vmware-vsphere-f24/replica-job-doesn-t-delete-removed-disk-t72662.html


Hardware additions & changes get synced to the replication target, but if something gets deleted on the source VM, the deletion isn’t carried over to the replica (no additional context to support my following statement here, but I’d believe this to just be disks; I wouldn’t expect things like NICs to persist on the replica once removed, though that would need testing to confirm).


In this scenario, Vladimir Eremin suggested starting a new replica. I’d be tempted to use the old replica as a mapping once you’d cleared out the old disks. Worst case it’s no good and you’ve got to seed from a backup or create a new replica anyway, so you might as well try and save some time!

It’s also worth resolving this sooner rather than later, as the R&D forums topic above indicated that whilst failover was okay, failback wasn’t, due to there being a missing disk. The user got around this by creating a blank disk to allow the failback to complete, then removing the disk again afterwards.
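
If you do go the route of clearing the stale disks out of the replica before mapping, that can be done in vCenter’s Edit Settings dialog, or sketched with VMware PowerCLI. The VM name and disk labels below are made-up placeholders, and this assumes a live vCenter connection; list and double-check which disks are the stale ones before deleting anything:

```powershell
# Connect to vCenter first, e.g.: Connect-VIServer -Server <your vCenter>

# 'MyVM_replica' and the disk labels below are placeholders
$replica = Get-VM -Name "MyVM_replica"

# List the disks first to confirm which ones are the stale leftovers
Get-HardDisk -VM $replica | Select-Object Name, Filename, CapacityGB

# Detach and delete the stale disks (-DeletePermanently removes the VMDK files too)
Get-HardDisk -VM $replica |
    Where-Object { $_.Name -in "Hard disk 2", "Hard disk 3" } |
    Remove-HardDisk -DeletePermanently -Confirm:$false
```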

Userlevel 7
Badge +8

I always think, to be 100% sure: if the VM is small, just create a new replica.

This could also be one to raise a low-severity support ticket for, to get a definitive answer.

Coming from SRM, doing things directly to the replica usually isn’t ideal, because the source doesn’t always know about it.


If it was a HUGE VM, I like the idea of stopping the job, removing the disks, then using the old replica to map the new job.

Userlevel 2

Hi @MicoolPaul 

thank you for your comment.
I think the replication job should at least raise a warning about a source/target VM config discrepancy, as this will lead to a failback failure. And the failback step is rarely included in replica tests, as it could harm the source VM!

What’s more, there’s no clear specification of which setting changes aren’t applied to the replica VM (SCSI controller changes aren’t, and there could be many others).

Veeam’s documentation about replication is missing a lot of caveats.

I’ll try to proceed as per your suggestion, with a first seeding from the existing replica VM.

best regards


Userlevel 2

Hi @Scott 

...it’s a HUGE VM … of course!

Userlevel 7
Badge +8

Hi @Scott 

...it’s a HUGE VM … of course!

haha, story of my life. Every time someone is like “can we just clone/restore/migrate” etc., I have to say everything takes a bit longer when it’s 50TB.

I have one over 125TB. I’m about to devise a plan to split it into a few. At least the change rate is low, so the incrementals still finish in time, but the active full was rough.
