
Hi community,

 

We recently performed a failover of vm100 to maintain server operations during a scheduled upgrade. The replica VM is currently running as expected.

However, we are now unable to initiate a failback to the production site. The process fails due to an invalid disk configuration on the running replica vm100. When attempting to edit the VM settings to remove the problematic disk, vSphere returns the following error:

"Invalid configuration for device '0'"

SCSI 0:5 is mapped to Hard disk 6 in vm100_replica.

The original VM should not have a Hard disk 6, but the replica somehow has it attached.
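
In case it helps with diagnosis, the label-to-SCSI mapping can be double-checked with a small pyVmomi script like the sketch below (illustration only, assuming pyVmomi is installed and you have vCenter access; the host name, credentials and VM name are placeholders):

```python
# Illustrative only: list each virtual disk on the replica with its SCSI bus:unit
# so you can see exactly what is sitting at SCSI 0:5 ("Hard disk 6").
# Host, credentials and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vm100_replica")
view.DestroyView()

# Map controller keys to bus numbers, then print every disk with its SCSI address.
controllers = {d.key: d for d in vm.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualSCSIController)}
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        ctrl = controllers.get(dev.controllerKey)
        bus = ctrl.busNumber if ctrl else "?"
        print(f"{dev.deviceInfo.label}: SCSI {bus}:{dev.unitNumber}, "
              f"{dev.capacityInKB // (1024 * 1024)} GB, backing={dev.backing.fileName}")

Disconnect(si)
```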

 

Would deleting the oldest VM snapshot help at all?

 

Is there a way to delete this Hard disk 6?

 

Thanks!
 

Some additional context on this case:

 

We probably can't restore a previous snapshot, because there has been a lot of user activity since the failover, so the current state is the most up-to-date.

So if we do a permanent failover, where our replica VM becomes the production VM, that should be okay too, right?


Hi @Naufal, this issue is usually caused by ESXi not removing the failback snapshot correctly during failback, leaving the VM in a "consolidation needed" state with invalid disk sizes.

 

What you can do:

  1. Contact Veeam Support to safely repair the VM state.

  2. If possible, try failing back to a different restored VM or restore the VM fresh and fail back to it instead.

More details are in Veeam KB2113: https://www.veeam.com/kb2113
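
If you want to check the "consolidation needed" flag yourself before opening the case, a minimal pyVmomi sketch along these lines should show it (illustration only; the vCenter host, credentials and VM name are placeholders):

```python
# Illustrative only: check whether the replica is flagged as "consolidation needed".
# Host, credentials and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vm100_replica")
view.DestroyView()

if vm.runtime.consolidationNeeded:
    print(f"{vm.name}: consolidation needed")
    # vm.ConsolidateVMDisks_Task()  # only attempt this if you understand the snapshot chain
else:
    print(f"{vm.name}: no consolidation flagged")

Disconnect(si)
```

Whether to actually run ConsolidateVMDisks_Task is a judgment call; with an invalid device configuration in play, letting Veeam Support drive the repair is the safer option.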



@Naufal, permanent failover is safe in this case. Perform the permanent failover, then update your backup and replication plan to use the DR VM (vm100_replica) as the new source. Create a new replication job in the reverse direction to replicate back to the original site.



This is what I would recommend as well.  Try this and let us know how it goes.


Though the KB Waqas provided may help, it looks to be a bit complex. The safest route would be what you suggested: perform a Permanent Failover, then change your Replication job to replicate the permanently failed-over VM back to DC1. Eventually, you can perform a Permanent Failover of that new replica and get back to replicating to your DC2 site.

Best.


Hi guys,

An update from my last comment.

So what I did was:

  1. Create a new replication job using the current replica VM as the source, excluding Disk 0:5.

  2. Set the production site as the target.

  3. Perform a failover, followed by a permanent failover.

There is no ghost disk any longer, and the server is running as usual for now.
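
For anyone landing on this thread later, a quick sanity check that SCSI 0:5 is really free on the VM that is now production might look like this (hypothetical sketch; host, credentials and VM name are placeholders):

```python
# Illustrative only: confirm nothing is attached at SCSI 0:5 on the VM that is now production.
# Host, credentials and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vm100")
view.DestroyView()

scsi0 = next(d for d in vm.config.hardware.device
             if isinstance(d, vim.vm.device.VirtualSCSIController) and d.busNumber == 0)
ghost = [d for d in vm.config.hardware.device
         if isinstance(d, vim.vm.device.VirtualDisk)
         and d.controllerKey == scsi0.key and d.unitNumber == 5]
print("SCSI 0:5 still occupied!" if ghost else "SCSI 0:5 is free - no ghost disk.")

Disconnect(si)
```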

 

Thanks again for all your advice and support!

 


Glad to hear you got it sorted, @Naufal!


Great to hear you got the issue resolved.

