Solved

Missing or corrupted metadata sections on several Backup to Disk jobs


I don't know what to do anymore; perhaps somebody can help.

There was a minor problem on the local target RAID6, which was resolved by the RAID controller's patrol read.

Before I noticed the issue, the 3 chained B2D jobs were already running, failing, and stopping. I remember hard-stopping one because it was hanging in an infinite loop.

Since then I have been trying, for a few days now, to get those 3 chained B2D jobs running again.

But no matter what I do (run the job(s), run a health check, use the repair option that is offered for the first job of the chained group, disable the job and try it all again), it always fails very quickly with the error: "Some metadata sections are missing or corrupted".

It’s the same on all 3 jobs in the chained B2D group.

The source isn't the problem, because a backup-to-tape job from the same source runs without any issue.

From my understanding, since I didn't find any other repair option in Veeam, the only route left is to delete and lose all restore points in this chained B2D group. Or did I miss something?

 

Best answer by Chris.Childerhose

Yes, your chains are corrupted, and you need to run an Active Full for each job to start a new one.


11 comments

Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8512 comments
  • Answer
  • April 2, 2025

Yes, your chains are corrupted, and you need to run an Active Full for each job to start a new one.
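For what it's worth, if you'd rather script this than click through the console, the Active Fulls can also be started from Veeam's PowerShell module. A minimal sketch, assuming hypothetical job names 'B2D-1' to 'B2D-3' (adjust to your environment) and a B&R version where the module is named Veeam.Backup.PowerShell:

```powershell
# Minimal sketch: force an Active Full on each of the three chained B2D jobs.
# Job names below are hypothetical placeholders.
Import-Module Veeam.Backup.PowerShell

foreach ($name in 'B2D-1', 'B2D-2', 'B2D-3') {
    $job = Get-VBRJob -Name $name
    if ($null -ne $job) {
        # -FullBackup runs the job as an Active Full, starting a new chain
        Start-VBRJob -Job $job -FullBackup
    }
}
```

Since the jobs are chained, note that Start-VBRJob waits for the job to finish by default (unless you pass -RunAsync), so the loop naturally runs them one at a time instead of tripping over the chaining.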


Tommy O'Shea
  • Experienced User
  • 125 comments
  • April 2, 2025

If you create a new backup job and point it at the same source and destination repository, does it fail or succeed? If it succeeds, that might further indicate that the problem is with the backup files you have now.

Have you opened a support ticket with Veeam to better analyze the error logs?


Lolek Bolek
  • Author
  • Comes here often
  • 13 comments
  • April 2, 2025
Chris.Childerhose wrote:

Yes, your chains are corrupted, and you need to run an Active Full for each job to start a new one.

OK, thank you, I hadn't thought of that.

That won't be a problem.

Would I be able to access older restore points from before the corrupted one(s) after a successful Active Full?

Because if not, and the Active Full would be the first accessible restore point, I could also dump everything and free up the space on that target.


Chris.Childerhose
Lolek Bolek wrote:

Would I be able to access older restore points from before the corrupted one(s) after a successful Active Full?

Because if not, and the Active Full would be the first accessible restore point, I could also dump everything and free up the space on that target.

A new Active Full starts a new chain, so the older restore points are not available from it. If you cannot restore from the old points anyway, you can delete them.
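If it helps with the decision, the restore points still registered for each backup can be listed before deleting anything. A minimal sketch, with the same hypothetical job names as above:

```powershell
# Minimal sketch: list the restore points Veeam still has for each backup,
# oldest first, so you can see what the old chain actually contains.
Import-Module Veeam.Backup.PowerShell

foreach ($name in 'B2D-1', 'B2D-2', 'B2D-3') {
    $backup = Get-VBRBackup -Name $name
    if ($null -ne $backup) {
        Write-Host "=== $name ==="
        Get-VBRRestorePoint -Backup $backup |
            Sort-Object CreationTime |
            Format-Table VmName, CreationTime, Type -AutoSize
    }
}
```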


Lolek Bolek
  • Author
  • Comes here often
  • 13 comments
  • April 2, 2025
Tommy O'Shea wrote:

If you create a new backup job and point it at the same source and destination repository, does it fail or succeed? If it succeeds, that might further indicate that the problem is with the backup files you have now.

Have you opened a support ticket with Veeam to better analyze the error logs?

No, I haven't.

My only source reference is the tape job.

I don't have more than those 3 jobs in that chained group that utilizes the RAID6 target.


Lolek Bolek
  • Author
  • Comes here often
  • 13 comments
  • April 2, 2025
Chris.Childerhose wrote:

A new Active Full starts a new chain, so the older restore points are not available from it. If you cannot restore from the old points anyway, you can delete them.

Okay, so rationally speaking, if I can't use the older restore points for restoring after a new active full, then I might as well delete all the job restore points... just delete everything, free up some space, recreate the jobs, and then do an active full and start again.

Otherwise, lugging around old restore points that I can't do anything with anymore would just be a useless waste of storage space (if I understood correctly).

I just thought there was a Vulcan Death Grip I overlooked that would fix everything, but unfortunately, Murphy's Law struck :)


Chris.Childerhose
Lolek Bolek wrote:

No, I haven't.

My only source reference is the tape job.

I don't have more than those 3 jobs in that chained group that utilizes the RAID6 target.

Well, if you are OK with starting clean with a new backup chain, then remove the older restore points to reclaim the space. Otherwise, you will need to test them if you need anything to restore at all. It's all up to how you want to move forward.


Chris.Childerhose
Lolek Bolek wrote:

Okay, so rationally speaking, if I can't use the older restore points for restoring after a new active full, then I might as well delete all the job restore points... just delete everything, free up some space, recreate the jobs, and then do an active full and start again.

Otherwise, lugging around old restore points that I can't do anything with anymore would just be a useless waste of storage space (if I understood correctly).

I just thought there was a Vulcan Death Grip I overlooked that would fix everything, but unfortunately, Murphy's Law struck :)

Yeah, no magic wand for this. LOL

Sounds like starting over is the way to go, and you'll gain the disk space back.


Lolek Bolek
  • Author
  • Comes here often
  • 13 comments
  • April 2, 2025
Chris.Childerhose wrote:

Well, if you are OK with starting clean with a new backup chain, then remove the older restore points to reclaim the space. Otherwise, you will need to test them if you need anything to restore at all. It's all up to how you want to move forward.

The tape jobs pull from the same source in parallel, so it's not like everything is lost.
Well, testing the older restore points would most likely mean something like "try to restore something when necessary." There probably isn't a health check for individual restore points, right?
How does it actually behave when restoring from the old restore points if I now create a new chain with an Active Full?
Can I then switch back and forth between the new chain and the old, now broken one in the Restore Manager, or is restoring from it only possible with considerable effort (if at all)?
I'm just trying to weigh things up, because no matter which route I take, an Active Full will be time-consuming.
Could it also be that the jobs themselves have taken some damage, so that in the end it would be better to recreate them?


Chris.Childerhose
Lolek Bolek wrote:

The tape jobs pull from the same source in parallel, so it's not like everything is lost.
Well, testing the older restore points would most likely mean something like "try to restore something when necessary." There probably isn't a health check for individual restore points, right?
How does it actually behave when restoring from the old restore points if I now create a new chain with an Active Full?
Can I then switch back and forth between the new chain and the old, now broken one in the Restore Manager, or is restoring from it only possible with considerable effort (if at all)?
I'm just trying to weigh things up, because no matter which route I take, an Active Full will be time-consuming.
Could it also be that the jobs themselves have taken some damage, so that in the end it would be better to recreate them?

If you try to restore, Veeam will try to read the old chains. You cannot swap a job back and forth between chains; it will likely fail if you point it to the old chain, so moving forward with the new one is best.

You can try to restore something, or even just check whether Veeam can read the restore points on disk first. If it cannot, then you know starting from scratch is the way to go.
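One way to script that read test, at least for Windows VMs, is to mount the latest point of each backup for file-level restore and immediately unmount it; the mount fails fast on the same metadata error if the chain is unreadable. A sketch under the same hypothetical names used earlier:

```powershell
# Minimal sketch: check whether Veeam can still read each backup by mounting
# the most recent restore point for file-level restore, then unmounting it.
Import-Module Veeam.Backup.PowerShell

foreach ($name in 'B2D-1', 'B2D-2', 'B2D-3') {
    $backup = Get-VBRBackup -Name $name
    $rp = Get-VBRRestorePoint -Backup $backup |
        Sort-Object CreationTime |
        Select-Object -Last 1
    try {
        $flr = Start-VBRWindowsFileRestore -RestorePoint $rp
        Write-Host "$name : latest point mounted OK"
        Stop-VBRWindowsFileRestore $flr
    }
    catch {
        Write-Host "$name : unreadable - $($_.Exception.Message)"
    }
}
```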


Lolek Bolek
  • Author
  • Comes here often
  • 13 comments
  • April 2, 2025
Chris.Childerhose wrote:

If you try to restore, Veeam will try to read the old chains. You cannot swap a job back and forth between chains; it will likely fail if you point it to the old chain, so moving forward with the new one is best.

You can try to restore something, or even just check whether Veeam can read the restore points on disk first. If it cannot, then you know starting from scratch is the way to go.

I just tried performing a file-level restore from all three jobs.
All of them said the backup metadata is not available.
So I don't need to carry around 3x 45 restore points anymore and will delete them from the RAID6 via Veeam.
I'll also recreate the jobs, just to be on the safe side, and then perform an Active Full.
Then I can be sure I'm not dragging anything old along.
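For anyone finding this thread later: the delete-from-disk step can also be scripted. A minimal sketch with the same hypothetical backup names; note that -FromDisk removes the actual backup files from the repository, not just the records in the configuration database, so double-check the names first:

```powershell
# Minimal sketch: delete the corrupted chains from the RAID6 repository.
# WARNING: -FromDisk removes the backup files themselves. Names are placeholders.
Import-Module Veeam.Backup.PowerShell

foreach ($name in 'B2D-1', 'B2D-2', 'B2D-3') {
    $backup = Get-VBRBackup -Name $name
    if ($null -ne $backup) {
        Remove-VBRBackup -Backup $backup -FromDisk -Confirm:$false
    }
}
```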

By the way, Chris, thanks again for your tip back then about clicking on the status of the tape backup job to find out which tape it's currently waiting for. That was invaluable and helps me several times a week!

Thanks a lot again 😊

