Solved

VMware Backup Failure "hexadecimal value 0x00"


Userlevel 3

Hello everyone,

 

This morning during my VMware VM backup job, I encountered the following error:

Error: Cannot proceed with the job: existing backup meta file 'Prio 0.vbm' on repository 'ds_veeam_sanctl03' is not synchronized with the DB. To resolve this, run repository rescan

 

As explained in the message, I initiated a rescan of the datastore on my NAS. The scan failed with this error:

11/01/2024 09:10:58 Warning    Failed to import backup path nfs3://xxxxx:/|volume1|ds_veeam|Prio 0|Prio 0.vbm Details: '.', hexadecimal value 0x00, is an invalid character. Line 1, position 1.

 

 

I did not find any information about this error in the knowledge base or other forum messages. I am using Veeam Backup & Replication 11.0.1.1261 P20220302.

 

Thanks in advance.

Philippe.
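
For context: the "hexadecimal value 0x00 … Line 1, position 1" detail is a .NET XML parser error, which suggests the 'Prio 0.vbm' metadata file (an XML document) starts with null bytes, i.e. its content was zeroed or truncated on disk rather than merely being out of sync with the database. A minimal Python sketch to check this, assuming the repository share is mounted locally (the path below is a hypothetical example, adjust it to your own mount):

    # Sketch: check whether the .vbm metadata file begins with null bytes.
    # VBM_PATH is a hypothetical example; point it at your own mounted copy.
    VBM_PATH = "/mnt/ds_veeam/Prio 0/Prio 0.vbm"

    with open(VBM_PATH, "rb") as f:
        head = f.read(64)

    if head.startswith(b"\x00"):
        print("File begins with null bytes: the metadata is corrupt, not just out of sync.")
    elif head.lstrip(b"\xef\xbb\xbf \r\n\t").startswith(b"<?xml"):
        print("File looks like valid XML; the parser error may come from elsewhere.")
    else:
        print("Unexpected leading bytes:", head[:16])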


Best answer by MarkBoothman 24 January 2024, 11:59


17 comments

Userlevel 7
Badge +6

Do you have space to try an active full? I know it will create a new chain, but it's probably the easiest way if there's a chance of a corrupt file.

Userlevel 7
Badge +7

Additionally, does Veeam have access to the repository? 

Userlevel 3

Additionally, does Veeam have access to the repository? 

Yes, Veeam can access the repository without any issues. I also tested directly from the NAS (Synology), where the repository is located, by creating a folder in the directory to check if a write error occurred, but everything is fine.

 

Do you have space to try an active full? I know it will create a new chain, but it's probably the easiest way if there's a chance of a corrupt file.

I believe I have enough space, but will I retain the history in case I need to restore a VM to its state from 3 days ago?

Userlevel 7
Badge +6

@flipflip I’d say that’s the best approach as you will still be able to recover from the old chain.

Let us know how you get on.

Userlevel 7
Badge +7

Do you have space to try an active full? I know it will create a new chain, but it's probably the easiest way if there's a chance of a corrupt file.

I believe I have enough space, but will I retain the history in case I need to restore a VM to its state from 3 days ago?

Yes, you will. This will depend on the restore points you have set in the backup job. If you only have a couple of restore points defined and take multiple backups, then you will lose the backup chain. For example, if restore points are set to 2 and you take 3 backups, there is a chance you will lose the oldest one. Again, this will depend on the settings you have defined. 

Userlevel 3

Okay, I will try to initiate an active full. Is it possible to schedule it within the job, or is it mandatory to launch it manually?

Userlevel 7
Badge +6

In the advanced settings of the job you can enable Active full backup. Ensure the day is set for when the job runs. Don't forget to disable the option once the job has run.

Backup Settings - User Guide for VMware vSphere (veeam.com)

Userlevel 3

Hello everyone,

This weekend I initiated a backup in Active Full mode, and once again, I encountered the same error.

I will conduct write tests on the hard drives used for the datastore.
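
For anyone following along, a read-back test like the sketch below can catch silent zeroing on the mounted repository; the mount point is a hypothetical placeholder:

    # Sketch: write random data to the NFS-mounted repository, flush it,
    # then read it back and verify nothing was silently zeroed or altered.
    import os

    MOUNT = "/mnt/ds_veeam"  # hypothetical NFS mount point
    test_file = os.path.join(MOUNT, "write_test.bin")
    payload = os.urandom(4 * 1024 * 1024)  # 4 MiB of random data

    with open(test_file, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # push the data past the page cache

    with open(test_file, "rb") as f:
        readback = f.read()

    print("OK: data intact" if readback == payload else "MISMATCH: data corrupted on disk")
    os.remove(test_file)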

Userlevel 7
Badge +6

@flipflip do you have another device you can use as a temporary repo?

Would be interesting to see if you get the same error writing to a new repo. I suspect this is local to the repo as the Active Full creates a new chain.

It may also be worth raising this with support; even if you have Community Edition, support is provided on a best-endeavours basis.

Userlevel 3

Hello,

 

The test on a new datastore on another NAS went well. I'm letting several backups run to see if the issue recurs.

 

At the same time, I've just opened a support ticket: 07092821.

 

Thanks,

Philippe.

Userlevel 7
Badge +6

Thanks for the update. Out of interest, have you cold power-cycled the NAS that you're having issues with?

That may well be worth trying as well.

 

Userlevel 3

No, I didn't even think about it ;) I'll give it a try.

Userlevel 3

Hello everyone,

a quick update following the reboot of the NAS hosting the datastore: the job started, but the same error persists :(

So far, I haven't received any response from support.

The other job on the different datastore continues to work without any issues.

Userlevel 3

Hello everyone,

 

Unfortunately, the ticket has just been automatically closed as no one from support has responded :(

 

I won't have any choice but to break the datastore and restart the backups of my VMs, hoping that the issue doesn't occur again.

Userlevel 7
Badge +6

@flipflip That happens if you have Community Edition. You mentioned that the backups worked to another datastore, so it does look to be the target configuration that's the issue here.

I’d also suggest upgrading to V12.1.

 

 

I encounter the same issue, also on version 12.1.
In my case the NFS repository is used as temporary storage alongside my StoreOnce units, and the backup chain was newly created there.

As stated in older forum posts about similar issues, it could be either a DNS issue or simply services not responding in a timely manner...
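
To rule out the DNS angle, one quick check is to resolve the repository host and time a TCP connection to the NFS port from the backup server; a sketch, with a placeholder hostname:

    # Sketch: verify the NFS repository host resolves promptly and answers on TCP 2049.
    import socket
    import time

    HOST = "nas01.example.local"  # placeholder: your repository's hostname
    PORT = 2049                   # standard NFS port

    t0 = time.monotonic()
    ip = socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)[0][4][0]
    print(f"Resolved {HOST} -> {ip} in {time.monotonic() - t0:.2f}s")

    t0 = time.monotonic()
    with socket.create_connection((ip, PORT), timeout=5):
        print(f"Connected to {ip}:{PORT} in {time.monotonic() - t0:.2f}s")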

Hello everyone,

 

Unfortunately, the ticket has just been automatically closed as no one from support has responded :(

 

I won't have any choice but to break the datastore and restart the backups of my VMs, hoping that the issue doesn't occur again.

Normally a ticket only gets closed automatically when the requester stops responding, not the other way around… that's not very professional.
