
Hello,

 

Recently I had a problem in a production environment. I had to delete old replicas (old VMs that weren’t on the production site anymore), and after deleting 4 old VM replicas inside Veeam, two critical production VMs were also deleted from vSphere.

 

Fortunately the affected VMs had backups, and their replicas did the job for the recovery, which also shows there wasn’t an accidental deletion on the Veeam side.

 

From what I analyzed and read in some articles on the web, two of the deleted replicas were probably somehow linked to the production VMs. Maybe someone incorrectly managed replicas through vCenter/ESXi instead of Veeam, but it’s hard to know exactly what happened.

 

But apart from finding out exactly what led to this, we still have to delete old replicas, and I’m not comfortable proceeding without a way to ensure that no VM will be deleted from the production site.

 

So I’d like to know if anyone can help me find an identifier for the replicas that I can compare against the production VMs’ IDs to avoid this problem.

 

I looked at the VeeamBackupManager and Svc.VeeamBackup logs and found some IDs, but apparently they relate to the jobs themselves and not the VMs.

 

The environment is running Veeam 11.0.1.1261 Standard Edition and vCenter/ESXi 6.7.

 

Thanks in advance,

Carlos

This is very odd. Assuming the production machines were on, deleting them should fail because they are in use. Were they on?

That said, I’d be looking at the replicas to see if the drives are somehow mapped back to the production machines. Not sure why, or maybe how, depending on your configuration, but this should not be happening.

This does seem less like a Veeam issue to me and more of a vSphere issue though. If it were me, I’d remove the VMs from the replication job (or delete the job) and then manually remove the replicas within vCenter (after obviously checking everything out for oddities).
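
If it helps, that disk mapping is quick to check from PowerCLI before touching anything else. A rough sketch, assuming the VMware PowerCLI module is installed; the vCenter server and VM names below are placeholders for your own:

# Compare the disk files backing a replica with those backing its production counterpart.
Connect-VIServer -Server vcenter.example.local

$prodVm    = Get-VM -Name 'ProdVM01'           # production VM (placeholder name)
$replicaVm = Get-VM -Name 'ProdVM01_replica'   # Veeam replica (placeholder name)

$prodDisks    = Get-HardDisk -VM $prodVm    | Select-Object -ExpandProperty Filename
$replicaDisks = Get-HardDisk -VM $replicaVm | Select-Object -ExpandProperty Filename

# Any overlap means the replica's virtual disks point at the production VM's files.
$overlap = $replicaDisks | Where-Object { $prodDisks -contains $_ }
if ($overlap) {
    Write-Warning 'Replica shares disk files with the production VM:'
    $overlap
} else {
    Write-Host 'No shared disk files between replica and production VM.'
}

If the two lists share any .vmdk paths, that cross-link would explain how deleting the replica took the production VM’s data with it.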


I would have to agree with Derek that this seems more of a VMware thing than a Veeam one, and would check what he has described. Let us know if there is anything else we can assist you with.


Hi Carlos - 

Welcome to the Community. I have never heard of such a behavior. From the Veeam side, to be able to delete a replica, the replica has to be in the Ready state, which means it isn’t involved in any failover/recovery process and isn’t currently being replicated to. When you delete the replica, Veeam removes it from the datastore within vSphere as well as from the VBR configuration DB...but that removal targets the replica, not the source production VM.

From the vCenter side, the VM has to be off. Hmm.. I'm trying to think how such a thing could happen. Or, at least appear like Veeam did so. 🤔 I wonder if @Mildur or @regnor have any thoughts? 

 


When you manually fail over or power on replicas via vCenter, Veeam isn’t aware of this. If you later delete the replica from disk, Veeam will look up the VM via its MoRef ID and delete it.

@Carlos I would suggest that you contact Veeam Support and let them provide you with a list of replicas and their MoRef IDs (stored in the configuration database). Or, as an alternative, just remove the replicas from the configuration and manually clean up any leftovers in vCenter (the ones that aren’t production workloads).
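
In the meantime, the vCenter side of that comparison is easy to build yourself. A minimal PowerCLI sketch (the vCenter name and CSV path are placeholders) that exports the name and MoRef ID of every VM vCenter knows about:

# Export name, MoRef ID and power state of every VM in vCenter, to cross-check
# against the replica MoRef IDs from the Veeam configuration database.
Connect-VIServer -Server vcenter.example.local

Get-VM |
    Select-Object Name,
                  @{ Name = 'MoRef'; Expression = { $_.ExtensionData.MoRef.Value } },
                  PowerState |
    Export-Csv -Path 'C:\Temp\vcenter-vm-morefs.csv' -NoTypeInformation

Any replica in Veeam’s list whose MoRef ID matches a production VM in that CSV is the one to investigate before deleting anything from disk.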


@Carlos I know it is not the right answer, but when I have to do some “critical operations” on a few Veeam replicas, I prefer removing them from the Veeam config and manually deleting them from vCenter... just to avoid this kind of potential issue.
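
If you want to script that, here is a rough sketch with the Veeam PowerShell cmdlets (run on the backup server; the replica name is a placeholder, and I’m assuming Remove-VBRReplica without -FromDisk only removes the record from the configuration, so test it on a non-critical replica first):

# Veeam 11 ships the Veeam.Backup.PowerShell module; import it if it isn't auto-loaded.
Import-Module Veeam.Backup.PowerShell

$replica = Get-VBRReplica -Name 'OldReplicaJob'   # placeholder name

# Without -FromDisk this should only drop the replica record from the Veeam
# configuration; the replica VM itself stays in vCenter for manual cleanup.
Remove-VBRReplica -Replica $replica

Afterwards you delete the leftover replica VM in vCenter yourself, once you’ve confirmed its MoRef ID and disk files don’t belong to a production VM.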

