Solved

VMware Backup Failure "hexadecimal value 0x00"


Hello everyone,

 

This morning during my VMware VM backup job, I encountered the following error:

Error: Cannot proceed with the job: existing backup meta file 'Prio 0.vbm' on repository 'ds_veeam_sanctl03' is not synchronized with the DB. To resolve this, run repository rescan

 

As explained in the message, I initiated a rescan of the datastore on my NAS. The scan failed with this error:

11/01/2024 09:10:58 Warning    Failed to import backup path nfs3://xxxxx:/|volume1|ds_veeam|Prio 0|Prio 0.vbm Details: '.', hexadecimal value 0x00, is an invalid character. Line 1, position 1.
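
The .vbm file is plain XML, so "Line 1, position 1" means the parser hit a NUL byte at the very first character, which suggests the file content was zeroed out rather than the XML merely being malformed. One quick way to confirm is to look at the file's first bytes from a machine that mounts the share; a minimal Python sketch, with a hypothetical mount path:

```python
# Minimal sketch: inspect the first bytes of the .vbm metadata file.
# The .vbm is XML, so "hexadecimal value 0x00 ... Line 1, position 1"
# means the parser found a NUL byte as the very first character,
# i.e. the file content looks zeroed rather than merely malformed.
from pathlib import Path

# Hypothetical mount point for the repository share on this NAS.
vbm = Path("/mnt/ds_veeam/Prio 0/Prio 0.vbm")
head = vbm.read_bytes()[:64]

print("first 64 bytes:", head.hex(" "))
if head.startswith(b"\x00"):
    print("file begins with NUL bytes -> corrupted/zeroed .vbm")
elif head.lstrip().startswith(b"<"):
    print("file starts with '<' -> looks like XML, as expected")
```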

 

 

I did not find any information about this error in the knowledge base or other forum messages. I am using Veeam Backup & Replication 11.0.1.1261 P20220302.

 

Thanks in advance.

Philippe.


17 comments

MarkBoothman
  • Veeam Legend
  • 197 comments
  • January 11, 2024

Do you have space to try an active full? I know it will create a new chain, but it's probably the easiest way forward if there is a chance of a corrupt file.


dips
  • Veeam Legend
  • 808 comments
  • January 11, 2024

Additionally, does Veeam have access to the repository? 


flipflip
  • Author
  • Comes here often
  • 10 comments
  • January 11, 2024
dips wrote:

Additionally, does Veeam have access to the repository? 

Yes, Veeam can access the repository without any issues. I also tested directly from the NAS (Synology), where the repository is located, by creating a folder in the directory to check if a write error occurred, but everything is fine.
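
Worth noting: creating a folder only proves the share accepts writes; it would not catch a write that silently lands as zeros. A stronger check is to write a known pattern, read it back, and compare checksums, as in this minimal Python sketch (the mount path is hypothetical):

```python
# Minimal sketch of a stronger test than just creating a folder:
# write a known pattern to the NFS-mounted repository, read it back,
# and compare checksums. A write that silently lands as zeros would
# pass a "can I create a file?" check but fail this one.
import hashlib
import os

MOUNT = "/mnt/ds_veeam"                  # hypothetical NFS mount point
test_path = os.path.join(MOUNT, "veeam_write_test.bin")

pattern = os.urandom(4 * 1024 * 1024)    # 4 MiB of random data
with open(test_path, "wb") as f:
    f.write(pattern)
    f.flush()
    os.fsync(f.fileno())                 # push the write out to the NAS

with open(test_path, "rb") as f:
    readback = f.read()

same = hashlib.sha256(pattern).digest() == hashlib.sha256(readback).digest()
print("write/read-back integrity:", "OK" if same else "MISMATCH")
os.remove(test_path)
```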

 

MarkBoothman wrote:

Do you have space to try an active full? I know it will create a new chain, but it's probably the easiest way forward if there is a chance of a corrupt file.

I believe I have enough space, but will I retain the history in case I need to restore a VM to its state from 3 days ago?


MarkBoothman
  • Veeam Legend
  • 197 comments
  • January 11, 2024

@flipflip I’d say that’s the best approach as you will still be able to recover from the old chain.

Let us know how you get on.


dips
  • Veeam Legend
  • 808 comments
  • January 11, 2024
flipflip wrote:
MarkBoothman wrote:

Do you have space to try an active full? I know it will create a new chain, but it's probably the easiest way forward if there is a chance of a corrupt file.

I believe I have enough space, but will I retain the history in case I need to restore a VM to its state from 3 days ago?

Yes, you will, though it depends on the number of restore points you have set in the backup job. If you only have a couple of restore points defined and then take several more backups, you may lose the older chain. For example, if restore points are set to 2 and you take 3 backups, there is a chance you will lose the oldest one. Again, this will depend on the settings you have defined.
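
As a rough illustration of that counting (a toy model only, not Veeam's actual logic, which is also chain-aware):

```python
# Toy model (not Veeam code) of count-based restore-point retention.
# Real retention is chain-aware: a full is deleted only once no
# increments depend on it, but the counting works like this.
retention_limit = 7                      # restore points set in the job

chain = ["day 0: full"]
chain += [f"day {d}: incremental" for d in range(1, 6)]
chain.append("day 6: active full (new chain starts)")
chain.append("day 7: incremental")

while len(chain) > retention_limit:      # retention kicks in
    print("deleted by retention:", chain.pop(0))

print("still restorable:")
for point in chain:
    print("  " + point)
```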


flipflip
  • Author
  • Comes here often
  • 10 comments
  • January 11, 2024

Okay, I will try to initiate an active full. Is it possible to schedule it within the job, or is it mandatory to launch it manually?


MarkBoothman
  • Veeam Legend
  • 197 comments
  • January 11, 2024

In the advanced settings of the job you can enable Active full backup. Ensure the day is set to one when the job runs. Don't forget to disable the option once the job has run.

Backup Settings - User Guide for VMware vSphere (veeam.com)


flipflip
  • Author
  • Comes here often
  • 10 comments
  • January 15, 2024

Hello everyone,

This weekend I initiated a backup in Active Full mode, and once again, I encountered the same error.

I will conduct write tests on the hard drives used for the datastore.


MarkBoothman
  • Veeam Legend
  • 197 comments
  • January 15, 2024

@flipflip do you have another device you can use as a temporary repo?

It would be interesting to see if you get the same error writing to a new repo. I suspect this is local to the repo, as the Active Full creates a new chain.

It may also be worth raising this with support as well; even if you have Community Edition, support is provided on a best-endeavours basis.


flipflip
  • Author
  • Comes here often
  • 10 comments
  • January 17, 2024

Hello,

 

The test on a new datastore on another NAS went well. I'm letting several backups run to see if the issue recurs.

 

At the same time, I've just opened a support ticket: 07092821.

 

Thanks,

Philippe.


MarkBoothman
  • Veeam Legend
  • 197 comments
  • January 17, 2024

Thanks for the update. Out of interest, have you cold power-cycled the NAS that you're having issues with?

That may well be worth trying as well.

 


flipflip
  • Author
  • Comes here often
  • 10 comments
  • January 17, 2024

No, I didn't even think about it ;) I'll give it a try.


flipflip
  • Author
  • Comes here often
  • 10 comments
  • January 18, 2024

Hello everyone,

A quick update following the reboot of the NAS hosting the datastore: the job started, but the same error persists :(

So far, I haven't received any response from support.

The other job on the different datastore continues to work without any issues.


flipflip
  • Author
  • Comes here often
  • 10 comments
  • January 24, 2024

Hello everyone,

 

Unfortunately, the ticket has just been automatically closed as no one from support has responded :(

 

I won't have any choice but to wipe the datastore and restart the backups of my VMs, hoping that the issue doesn't occur again.


MarkBoothman
  • Veeam Legend
  • 197 comments
  • Answer
  • January 24, 2024

@flipflip That happens if you have Community Edition. You mentioned that backups worked to another datastore, so it does look like the target configuration is the issue here.

I’d also suggest upgrading to V12.1.

 

 


Theissen Fabien

I'm encountering the same issue, and I'm already on version 12.1.
In my case the NFS share is used as temporary storage alongside my StoreOnce units, and the backup chain was newly created there.

As stated in older forum posts with similar issues, it could be either a DNS issue or simply services not responding in a timely manner...
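
Both causes are quick to rule in or out from the backup server: confirm the NAS name still resolves and that the NFS port answers. A minimal Python sketch, with a placeholder hostname:

```python
# Minimal sketch: check DNS resolution of the NAS name and TCP
# reachability of the standard NFS port from the backup server.
# The hostname below is a placeholder for the repository NAS.
import socket

NAS_HOST = "nas.example.local"
NFS_PORT = 2049

try:
    addrs = {info[4][0] for info in socket.getaddrinfo(NAS_HOST, NFS_PORT)}
except socket.gaierror as err:
    print("DNS FAILED:", err)
else:
    print("DNS OK:", NAS_HOST, "->", ", ".join(sorted(addrs)))
    for addr in sorted(addrs):
        try:
            with socket.create_connection((addr, NFS_PORT), timeout=5):
                print("TCP", addr, "port", NFS_PORT, "reachable")
        except OSError as err:
            print("TCP", addr, "port", NFS_PORT, "unreachable:", err)
```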


Theissen Fabien
flipflip wrote:

Hello everyone,

 

Unfortunately, the ticket has just been automatically closed as no one from support has responded :(

 

I won't have any choice but to wipe the datastore and restart the backups of my VMs, hoping that the issue doesn't occur again.

Normally a ticket only gets closed automatically when the requester stops responding, not the other way around… that's not very professional.

