
Last week I had to troubleshoot a tape (GFS) backup problem. I guess the solution could be interesting for others too.

In this environment, synthetic full and tape backup are scheduled on the same day, the full prior to the tape jobs. Normally we do not see any problems with this configuration. But last week we saw this error in the tape job:

12.01.2022 07:04:45 :: Error: Storage ('c97b8ddb-a9dc-4b49-b441-dc1031100a58', CreationTime '08.01.2022 23:15:25') not found 

As a result the whole job failed.

 

After taking a look at the logs, I found out that the related storage object was a VIB file of one of the backup jobs the tape job had to bring to tape.

[08.01.2022 23:45:07] <01> Info         [CTapeVmTaskBuilder] Storage c97b8ddb-a9dc-4b49-b441-dc1031100a58, mediaset: Weekly, path: servername.vm-187205D2022-01-08T231525_8F3A.vib, creation time: 08.01.2022 23:15:25, is synthetic: True, is synthetic full: True

Although the disk backup job had been running at the time, the file was not there at all. Why? The reason was quite simple: on this day a synthetic full was scheduled. So VBR creates a VIB file that is replaced by a VBK file at the time of merging. Hence the error in the tape job: the tape job starts and searches for backup files to bring to tape, and here the restore point of the VIB file was selected. But the backup job had not performed the merge by this time. Later, when the tape job wanted to access the VIB file, it had already been replaced by the VBK file because of the merge for the synthetic full. In our case, the disk backup job needed a retry, and therefore the merge process ran much later than normal.
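To make the timing easier to see, here is a minimal Python sketch of the race (purely illustrative; all file and function names are made up, and this is not how VBR works internally):

```python
# Toy model of the race: the tape job selects its restore points before the
# delayed synthetic-full merge has replaced the VIB with a VBK.

repository = {"servername.vmD2022-01-08T231525.vib"}   # state right after the incremental run

def tape_job_select_points(repo):
    """The tape job builds its task list from the files present right now."""
    return list(repo)

def synthetic_full_merge(repo, vib_name, vbk_name):
    """The delayed merge for the synthetic full: the VIB is replaced by a VBK."""
    repo.discard(vib_name)
    repo.add(vbk_name)

def tape_job_copy(repo, selected):
    """When the tape job finally reads the files, the VIB is already gone."""
    for name in selected:
        if name not in repo:
            raise FileNotFoundError(f"Storage '{name}' not found")  # same symptom as in the job log
        print(f"copied {name} to tape")

selected = tape_job_select_points(repository)                 # 1. tape job starts and selects the VIB
synthetic_full_merge(repository,
                     "servername.vmD2022-01-08T231525.vib",
                     "servername.vmD2022-01-08T231525.vbk")   # 2. retry-delayed merge replaces VIB with VBK
try:
    tape_job_copy(repository, selected)                       # 3. tape job fails: storage not found
except FileNotFoundError as err:
    print("Tape job failed:", err)
```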

If you suffer from this error more often, you could try to create some time buffer between the full and the tape job. Another solution could be to configure the tape job to use already existing restore points instead of waiting for new or currently running ones.
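As a rough illustration of the second idea (again just a sketch with made-up names, not a Veeam API): instead of waiting for the point that is still being processed, the selection logic simply falls back to the newest restore point whose file already exists on disk.

```python
def pick_existing_restore_point(candidates, files_on_disk):
    """Return the newest candidate whose backup file is already on disk (made-up helper)."""
    for point in sorted(candidates, key=lambda p: p["creation_time"], reverse=True):
        if point["path"] in files_on_disk:
            return point
    return None

# The newest point (the VIB of the still-running/merging job) is not on disk yet,
# so the sketch falls back to last week's full that already exists.
candidates = [
    {"path": "servername.vmD2022-01-08T231525.vib", "creation_time": "2022-01-08T23:15:25"},
    {"path": "servername.vmD2022-01-01T231200.vbk", "creation_time": "2022-01-01T23:12:00"},
]
files_on_disk = {"servername.vmD2022-01-01T231200.vbk"}
print(pick_existing_restore_point(candidates, files_on_disk))
```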

The only thing I cannot answer is: why does the whole job fail when just one file of one backup job was missing?

Interesting failure… I have not seen this behavior before.

Our weekly backups are copied to tape on the same day as they are created on disk. Perhaps our jobs are small enough to finish before the tape job runs.

I will have a look at this sometime soon…



Do you run synthetic fulls on these days? If not, you will not see this error. But it is good practice to run fulls before tape-out. Or maybe you never had retries or other problems that led to a later merge process.


Yes, we do synthetic fulls on these days. We copy to tape every day for this specific data.

But the disk repository is a ReFS-formatted disk, so the merging process is rather fast...



We also use ReFS. The time it takes to merge is not the problem, just the later start of the process.


Ok, in this environment the VM backups run very smoothly and without any problems.

We have far more problems getting the RMAN data to tape, as this works as file-to-tape only… it works, but it is a humongous number of files…


@vNote42, did you intentionally publish it in Veeam Legends group? Just checking in if you would like to have it in Blogs/Discussions section :blush:


Yeah, all our tape jobs have that option enabled by default to ensure that the jobs complete at a decent time. Interesting error on this one, though.



Hi @Kseniya! Thanks for asking! I actually did not intend that! Could you do your magic and move it to the public space? 

… it seems I am a little out of shape :joy:


Absolutely, @vNote42  :blush: Happy to help and no worries!



Thanks for your incredible support!

