
Recovered VMs not powering up in DataLab test because the virtual hard disks cannot be accessed.

We have a couple of large VMs with multiple disks (60 TB and 90 TB, with 10 disks on each VM) that will not power up due to the error above. We are using storage failover, and we have daily replication of the production volumes to our DR volumes. Replication status on our storage device shows that replication is successful, but vSphere shows a host incompatibility/permission issue accessing the disks.

I don’t think this is either a host or a permission issue, since the other virtual disks for the same VMs can be seen in vCenter. There are other VMs in the orchestration plan, and these are the only two servers that would not come up. Below is a screenshot from vCenter.


First - there is no screenshot. Second - are the VMware hosts on the same version of ESXi? To me, if permissions are OK, the incompatibility message suggests different versions of ESXi or a mismatched VM hardware version.


Not sure why the screenshot did not attach. Thanks for letting me know. Anyway, the virtual disks that prevent the VM from powering on show a size of 0. The VMDKs are present in the datastore. The error stack in vCenter shows that it is unable to access these VMDKs.
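For reference, here is roughly how the files can be checked from the ESXi shell (the UUID and folder names are placeholders for our environment):

```
# List the VM's disk files to confirm the descriptor and extent files
# exist and have nonzero sizes (ls -l shows provisioned size on VMFS):
ls -lh /vmfs/volumes/<uuid-of-datastore>/<vm-folder>/*.vmdk

# The small descriptor .vmdk points at the extent file (e.g. -flat.vmdk);
# dump it to see which extent it references:
cat /vmfs/volumes/<uuid-of-datastore>/<vm-folder>/<disk-name>.vmdk
```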



I’m a bit confused about what you’re trying to do. You mention DataLab, which leads me to believe you’re attempting some kind of SureBackup job; but then you hint at replication via your storage devices; and then you mention a disk error in vCenter 🤷🏻‍♂️ Can you start from the beginning and share what you’re attempting, and how? If Veeam is involved here, what version are you using? Thanks @spider32



My guess here is that the VM hardware version could be causing this. Check that the version your VMs are at is supported in each environment.
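If it helps, a quick way to check that from each host's shell (the .vmx path is a placeholder):

```
# List registered VMs; the "Version" column (e.g. vmx-19) is the
# VM hardware version:
vim-cmd vmsvc/getallvms

# Or read it straight from a VM's configuration file:
grep -i "virtualHW.version" /vmfs/volumes/<uuid-of-datastore>/<vm-folder>/<vm-name>.vmx
```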


We had a similar problem in the past with vSAN.

Some VMs disappeared, and restore via Veeam was successful, but the data did not appear on vSAN. They were just like ghost VMs...

It was caused by a vSAN bug triggered when consumed capacity exceeded 80% of the datastore.

So maybe it is not an issue with Veeam, but with the platform itself...


A support case is most likely the way to go. The disk sizes showing as 0 is not right, and I would imagine there is some warning or error in the logs. Or perhaps the disks are encrypted in-guest and simply cannot be mounted, though you should see errors in the SureBackup job in that case.

But I strongly recommend a support case.


I have already opened a support case.

@coolsport00, we are using the DataLab a bit differently. We have the orchestration plan and the virtual routers. Once we run the DataLab test, the VMs are actually placed in NSX logical segments. So Veeam Orchestrator is mainly being used to register the VMs using the files from the replicated storage volumes.
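(For context, I assume the Orchestrator drives this through the vSphere API rather than the shell, but the equivalent host-side operation would be something like the following, with placeholder paths:)

```
# Register a VM on an ESXi host directly from its files on the
# replicated datastore; prints the new Vmid on success:
vim-cmd solo/registervm /vmfs/volumes/<uuid-of-datastore>/<vm-folder>/<vm-name>.vmx
```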

We are not using vSAN, so this is not due to that 80% bug.


Hi @spider32 -

Ah, ok...so you’re using VRO. Understood. Well, keep us posted on what Support says. Thanks for the update.


The Orchestrator is looking for the directory below, which is not present on the DR ESXi hosts (but exists on all of the primary hosts).

Not sure if manually creating the folders and copying the files would fix the issue, but even if it did, it still would not tell us what the cause is.
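As a quick sanity check on a DR host (UUID and folder are placeholders):

```
# Does the datastore path the Orchestrator references resolve at all?
ls -d /vmfs/volumes/<uuid-of-datastore>

# And does the VM folder exist inside it?
ls /vmfs/volumes/<uuid-of-datastore>/<vm-folder>
```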


The path there is a datastore path (i.e., /vmfs/volumes/<uuid-of-datastore>/<vm-folder>/<vm-files>). Is the datastore added to your DR ESXi hosts? The /vmfs/... folder path is created when you add storage (datastores) to hosts. I haven’t played with VRO, so I’m not completely familiar with how it works, and I’m not sure why it’s looking for the source datastore the VM is originally stored on... at least, I assume that’s what it’s doing.
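One way to confirm is from the host shell; this lists the mounted datastores and maps each friendly name to its UUID mount point under /vmfs/volumes:

```
# Shows Volume Name, UUID, and Mount Point for every mounted filesystem:
esxcli storage filesystem list

# The friendly-name entries under /vmfs/volumes are symlinks to the
# UUID directories:
ls -l /vmfs/volumes/
```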


We have a case open with Broadcom to help identify why /vmfs/volumes/<uuid-of-datastore>/<vm-folder>/<vm-files> is being referenced when accessing the VMDKs. It should just be showing datastore/vmName.
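For anyone following along, the disk references can be dumped straight from the .vmx (placeholder path); each scsiX:Y.fileName entry would normally be a relative name like "vmName.vmdk":

```
# Show which paths the VM's config uses for its virtual disks:
grep -i "fileName" /vmfs/volumes/<uuid-of-datastore>/<vm-folder>/<vm-name>.vmx
```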

 


The fix for this issue is to edit the .vmx file to remove the /vmfs/volumes/<uuid-of-datastore>/<vm-folder>/ entries and reload the VM on the ESXi host. Please see the Broadcom article below:

 

https://knowledge.broadcom.com/external/article?articleNumber=343248
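Roughly, the procedure from the ESXi shell looks like this; the sed pattern and placeholders are illustrative, so follow the KB for the exact steps:

```
# 1. Find the VM's ID on the host:
vim-cmd vmsvc/getallvms

# 2. Back up the .vmx, then strip the absolute datastore prefix from
#    the disk entries so they become relative paths again:
cd /vmfs/volumes/<uuid-of-datastore>/<vm-folder>
cp <vm-name>.vmx <vm-name>.vmx.bak
sed -i 's|/vmfs/volumes/<uuid-of-datastore>/<vm-folder>/||g' <vm-name>.vmx

# 3. Reload the edited configuration without unregistering the VM:
vim-cmd vmsvc/reload <vmid>
```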

 

However, we still do not know the root cause of the issue.

 

Well... glad you at least have a resolution, spider 👍🏻 Hopefully it’s just a one-off.


Definitely an interesting solution, but glad to see you were able to address the issue.

