
 

Morning,

 

Unsure why it would say 0 KB. I’d be interested to know whether your vCenter account has full administrator rights or a custom role configured, in case it doesn’t have sufficient permissions to read the VM. What happens when you try to back it up? That would provide some insight.

 

Is there anything special about the disk configuration, such as independent disks or RDMs within vSphere?

I assume you haven’t configured any exclusions on the job for the VM, such as skipping specific disks.
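To make the disk-mode question concrete, here is a minimal sketch of which vSphere disk modes a snapshot (and therefore an image-level backup job) can include. The mode strings match the vSphere API's VirtualDiskMode values; the helper function itself is illustrative, not part of any Veeam or VMware API:

```python
# Independent disks are excluded from VM snapshots, so an image-level
# backup job cannot process them. The strings below are the vSphere
# API's VirtualDiskMode enum values for the independent modes.
INDEPENDENT = {"independent_persistent", "independent_nonpersistent"}

def snapshot_includes(disk_mode: str) -> bool:
    """Return True if a VM snapshot would capture a disk in this mode."""
    return disk_mode.lower() not in INDEPENDENT

print(snapshot_includes("persistent"))                 # True
print(snapshot_includes("independent_nonpersistent"))  # False
```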

 

I’d start by letting the job run, backing the VM up, and seeing what the output says.


The disk was configured as independent non-persistent.

 
This was the first warning. After rerunning, the backup was successful. My biggest worry is that it shows the disk as 0 B, while the VM is 500 GB.

 



 


Hi, this is your problem. Independent disks can’t be snapshotted, so Veeam can’t process them.

 

You’d need to use the Veeam Agent if the guest OS supports it, or reconfigure the disks as standard dependent persistent disks if the server’s requirements allow it.
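For the reconfiguration route, this is a sketch of the shape of the reconfigure spec you would send to vSphere to flip a disk back to dependent persistent mode. Plain dicts stand in for the pyvmomi objects (vim.vm.ConfigSpec and friends) so the example stays self-contained; a real change goes through ReconfigVM_Task and typically requires the VM to be powered off, and the disk key 2000 below is just an illustrative value:

```python
# Illustrative only: mirrors the structure of a vSphere device-change
# spec, using plain dicts instead of real pyvmomi classes.
def make_disk_mode_spec(disk_key: int, new_mode: str = "persistent") -> dict:
    """Build an 'edit' device-change spec setting backing.diskMode."""
    return {
        "deviceChange": [{
            "operation": "edit",
            "device": {
                "key": disk_key,                    # identifies the VMDK
                "backing": {"diskMode": new_mode},  # dependent persistent
            },
        }]
    }

spec = make_disk_mode_spec(2000)
print(spec["deviceChange"][0]["device"]["backing"]["diskMode"])  # persistent
```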


Any changes in vCenter? I know this picture from when a VM is moved from one vCenter to another without applying the change to the backup job.


Yes, there were changes in vCenter.

 

 


Hi, this is your problem. Independent disks can’t be snapshotted, so Veeam can’t process them.

 

You’d need to use the Veeam Agent if the guest OS supports it, or reconfigure the disks as standard dependent persistent disks if the server’s requirements allow it.

Exactly, independent disks cannot be backed up with a VMware backup job.

What I'm wondering: if you configure them as non-persistent, the disk is reset after every reboot. Is that intended?


Yes, there were changes in vCenter.

 

 

What kind of changes? Did you restore vCenter, or replace it with a fresh install?


The disk was configured as independent non-persistent.


 
This was the first warning. After rerunning, the backup was successful. My biggest worry is that it shows the disk as 0 B, while the VM is 500 GB.


 



 

Please click SunSystems in your first screenshot and post the log entries shown there.


 

Probably something went wonky in the vCenter cache. To clear it, try removing the VM from the job, clicking the rescan button (the little circular arrows) in the add dialog, and then re-adding the VM. It should then show the correct VM size.


@vmJoe & @vNote42 are we sure the size should show differently, considering the disks are non-persistent independent disks?


Also, if you want to back up this data and they really are non-persistent disks, you may want to install the agent and take a backup before you reboot the machine, or perform a V2V from within the OS. I haven’t tried this, but I would think either would capture the data inside the VM. You could then restore the VM using standard dependent/persistent disks. But should you reboot, the VM should revert to its initial boot-up state as I understand it (I’ve only read about non-persistent disks; I’ve never actually used one).


After a bit more reading, a warm reboot may still be fine, but I’m not sure I’d risk it. Powering off/shutting down the VM will cause the redo log to be deleted and all changes to be lost. With that said, you *might* also be able to clone the VM while it’s running, but clone it to a standard dependent disk.


 

Probably something went wonky in the vCenter cache. To clear it, try removing the VM from the job, clicking the rescan button (the little circular arrows) in the add dialog, and then re-adding the VM. It should then show the correct VM size.

 

This is when I usually see this... removing the VM and adding it back to the job tends to fix it. It could be due to the VM being unregistered and re-registered on the host/vCenter, or perhaps the vCenter being removed from Veeam without the jobs having been deleted, so the ID used no longer matches and the VM isn’t found, etc.

That said, it looks to me like the independent non-persistent disk may be an issue here.


@vmJoe @vNote42 are we sure the size should show differently, considering the disks are non-persistent independent disks?

Just confirmed in my lab that no kind of independent disk is included in the total size. So you were right 😉
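That lab result explains the 0 B display. A minimal sketch of the arithmetic, assuming (as confirmed above) that the reported size sums only disks a snapshot would include; the disk tuples are made-up illustration, not real Veeam output:

```python
# If the reported size counts only snapshot-able (dependent) disks,
# a VM whose single 500 GB disk is independent non-persistent
# contributes nothing to the total.
def reported_size_gb(disks):
    """Sum capacities of disks a snapshot would include.

    disks: list of (capacity_gb, disk_mode) tuples.
    """
    independent = {"independent_persistent", "independent_nonpersistent"}
    return sum(cap for cap, mode in disks if mode not in independent)

print(reported_size_gb([(500, "independent_nonpersistent")]))  # 0
print(reported_size_gb([(500, "persistent")]))                 # 500
```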


@vmJoe @vNote42 are we sure the size should show differently, considering the disks are non-persistent independent disks?

If there is a regular VMDK, its size should show up here. Typically when I see this it’s due to the vCenter cache service. But yes, independent disks can affect this sizing.


Unregistering the VM through vCenter, rescanning the vCenter in Veeam, then removing/re-adding the VM to the job solved the issue for me.

