Solved

Replication Job, Storage vMotion of source VMs?


Userlevel 7
Badge +22

Hey Everyone,

 

As I understand it, Veeam is fully vMotion-aware, and that includes Storage vMotion. So if someone Storage vMotions a source VM, the replication job should not have any issues?

 

However, if someone moves the replica itself to a different datastore then, based on @MicoolPaul's best answer, you would need to remove the replica from the configuration database and then use seeding/mapping in the replication job to map it to the replica in its new location.

 

 


Best answer by MicoolPaul 29 August 2023, 18:39


13 comments

Userlevel 7
Badge +22

@MicoolPaul's best answer :) for some reason it did not link in the post.

Userlevel 7
Badge +6

Veeam, being vCenter-aware, should see that the VM was moved to a different datastore and still update the replica in its new location with the VM. I've done this a few times, mostly after running out of space on my replica datastore because CDP replicas consumed more space than expected. It looks like remapping the replica in the job may be required if it doesn't follow automatically, but mine have all been automatic. Michael's solution seems to be an "if all else fails" sort of solution, as I've never had to do that.

Userlevel 7
Badge +20

Hi, that's correct: the source is found via its vCenter MoRef ID, but the replica is defined to exist within a specific datastore, as it's a Veeam "owned/maintained" VM. You've told Veeam, within your replica job, that you want it to reside on a specific datastore. So if you Storage vMotion the replica, the files Veeam expects are missing; hence you've got to fix it up, or Veeam will want to recreate the VM where you told it the VM should exist.
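
To illustrate the MoRef point, here is a minimal pyVmomi sketch; the vCenter host "VC_HOST", the credentials, and the VM name "app01" are placeholder assumptions, not values from this thread. It prints the inventory MoRef ID that the source VM is tracked by, alongside the datastore(s) currently backing it: the MoRef stays constant across a Storage vMotion, while the datastore list is what changes.

```python
# Minimal pyVmomi sketch -- hypothetical vCenter "VC_HOST", credentials, and
# VM name "app01" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; validate certs in production
si = SmartConnect(host="VC_HOST", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app01")
    view.Destroy()

    # The MoRef ID (e.g. "vm-1234") identifies the inventory object and does not
    # change when the VM is Storage vMotioned...
    print("MoRef ID  :", vm._moId)
    # ...whereas the backing datastores do change with the migration.
    print("Datastores:", [ds.name for ds in vm.datastore])
finally:
    Disconnect(si)
```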

Userlevel 7
Badge +17

That sounds exactly correct from the replica side...yes. On the source side, I've vMotioned/Storage vMotioned many times over with no issues. Correct, Geoff.

Userlevel 7
Badge +6

Now you guys are going to make me go test this, huh….fine….I’ll report back and let you know if I’m crazy or not….

Userlevel 7
Badge +6

@MicoolPaul - 

Does any of this behaviour change if using Cloud Director with a specified storage policy?

Userlevel 7
Badge +6

Can confirm that I’m not crazy...or at least not this time!  😜

I moved a couple of VMs from my DR datastore to my production datastore. I think both complained about wanting a consolidation of the VMs after the migration. For one, I deleted all snapshots and it came back clean (no longer requested consolidation); the other I left as-is. I had to run a backup, as my replication jobs use the backup repo for seed data rather than replicating from the production VM, but I then started my replica job and it added the new replica restore point (snapshot) to the existing VM at its new production datastore location.

For the fun of it, I moved them back to the DR datastore and ran another round of backups so that I had a new restore point. I noted that both had replica restore points but didn't complain about needing consolidation this time. I then ran the replica job again and both still succeeded.

I will note that, looking at the datastores both for the move to the prod datastore and for the move back, the Storage vMotion did leave behind some orphaned files on the old datastore that I manually deleted. I'm guessing this had to do with Storage vMotion of VMs with snapshots in place, but the files were not critical to VM functionality. Had there not been snapshots on those VMs, I'd assume there would have been no orphaned files, but I figured it was worth mentioning. I will also note that there was no VM mapping in place to connect the production VMs to their corresponding replicas. I'm a little curious whether I would have had the same results if mapping had already been configured in the replication job.

In the end, it worked great for me, but as noted previously, I'm guessing this is not going to be 100% and may require extra steps if something is out of whack and the VMs need to be remapped to their new location or, in the worst case, unregistered and re-registered within the replication job.
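
On the orphaned-files observation above: a quick way to review what a Storage vMotion left behind is to list the VM's old folder through the datastore browser. Below is a minimal pyVmomi sketch under assumed placeholder names ("DR-DS01" for the old datastore, "app01" for the VM folder, "VC_HOST" for vCenter); it only lists files, deleting anything is left to manual review.

```python
# Minimal pyVmomi sketch -- "DR-DS01" (old datastore) and "app01" (VM folder)
# are placeholders. Lists whatever files are still sitting in the VM's folder
# on the source datastore after the migration, so leftovers can be reviewed
# before deleting anything by hand.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="VC_HOST", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    ds = next(d for d in view.view if d.name == "DR-DS01")
    view.Destroy()

    # Search the VM's old folder (and subfolders) for any remaining files.
    spec = vim.host.DatastoreBrowser.SearchSpec(matchPattern=["*"])
    task = ds.browser.SearchDatastoreSubFolders_Task("[DR-DS01] app01", spec)
    WaitForTask(task)

    for result in task.info.result:
        for f in result.file:
            print(result.folderPath, f.path)
finally:
    Disconnect(si)
```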

Userlevel 7
Badge +6

@MicoolPaul - 

Does any of this behaviour change if using Cloud Director with a specified storage policy?

I can’t speak to this as I don't have vCD running.

Userlevel 7
Badge +6

We have datastore groups set up for DRaaS storage. I point the hardware plans at the group and can then Storage vMotion replicas without issue to balance datastores if required.

This also works well when I do replicas on our own infrastructure, outside of Cloud Connect.

Userlevel 7
Badge +14

I'm with @dloseke: Storage vMotion of replicas shouldn't cause any issues. The configured datastore in the replica job only matters during the initial sync. During subsequent replication runs, Veeam doesn't care where the replica is located as long as its MoRef ID doesn't change.
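
As a sketch of why subsequent runs keep working: the managed object reference is a property of the vCenter inventory object and survives a relocation. The hypothetical pyVmomi example below (placeholder names "app01_replica" for the replica VM and "DR-DS02" for the target datastore) performs a Storage vMotion via RelocateVM_Task and shows the MoRef ID unchanged before and after; this is not Veeam code, just the underlying vSphere behaviour it relies on.

```python
# Minimal pyVmomi sketch -- "app01_replica" (replica VM) and "DR-DS02" (target
# datastore) are placeholders. Relocates the VM to another datastore and shows
# that its MoRef ID is identical before and after the move.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="VC_HOST", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
    vm = next(o for o in view.view
              if isinstance(o, vim.VirtualMachine) and o.name == "app01_replica")
    target = next(o for o in view.view
                  if isinstance(o, vim.Datastore) and o.name == "DR-DS02")
    view.Destroy()

    print("MoRef before:", vm._moId)
    WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target)))
    print("MoRef after :", vm._moId)  # unchanged -- only the files moved
finally:
    Disconnect(si)
```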

Userlevel 7
Badge +8

Interesting. I'm testing replicas right now and this didn't come up, as I was not planning to Storage vMotion them.

 

To make things even more complex, the datastore is already replicated via SAN replication, as I was using it for some SRM testing and other manual DR testing (replicating SAN volumes with immutable snapshots on the SAN, etc.).

 

Happy to see this before I have to reseed it. 

Userlevel 7
Badge +6

I was using it for some SRM testing 

 

:Gasp:

Nah...there's something to be said for SAN-based replication, IMO. SRM required some setup for one I did a few years ago, and it worked pretty well with Dell SC (Compellent) arrays. It just annoyed me (and forced the client to buy another array) because even though SC can replicate with PS (EqualLogic), the SRA has to match on each end, and SC and PS use different SRAs. So SRM was of no use until they matched, or else I would have had to go a different route (vCenter-based VM replication, or Veeam of course, but they weren't using Veeam).

 

Userlevel 7
Badge +8

100% agree.

 

I'm currently working towards moving more workloads to Veeam using Replication and CDP.

SRM+SAN replication is not cheap.

 

That being said, I've used it for 5+ years and it has been rock solid. SAN replication has some benefits, but it can also be a pain at times.

If you have a slow link, or want consistency, using something like IBM's Global Mirror with Change Volumes keeps things consistent in a DR event. This is the entire reason for using SRM in the first place.

It allows the data to be copied to a snapshot, then the snapshot syncs, then the snapshot gets committed at DR. You could end up with up to 5 minutes of data loss, but if the link breaks you don't lose the data in transit, as you would with most asynchronous transfers.

Going active-active is an option, but even on a dedicated link, it adds latency.

 

My choice of SRM with GMCVs for critical systems, plus Veeam Replication, is working great, with Veeam slowly taking over more and more. CDP is also good, but I enjoy having the multiple restore points on the other end too.

 

 
