Based on what I can find for this error, it seems related to the backup chain and possibly missing files. Try checking whether any incrementals are missing, or run an Active Full to start a new chain and see if the error continues.
The other option noted was to create a Support ticket.
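If it helps, a quick way to sanity-check the chain on disk is to list the full (.vbk) and incremental (.vib) files in date order so a gap stands out. A minimal Python sketch, assuming a local repository path (the path and job name below are placeholders, adjust to your environment):

    import pathlib
    from datetime import datetime

    repo = pathlib.Path(r"D:\Backups\JobName")  # hypothetical repository path
    points = sorted(
        (p for p in repo.iterdir() if p.suffix.lower() in (".vbk", ".vib")),
        key=lambda p: p.stat().st_mtime,
    )
    for p in points:
        kind = "FULL" if p.suffix.lower() == ".vbk" else "incr"
        stamp = datetime.fromtimestamp(p.stat().st_mtime)
        print(f"{stamp:%Y-%m-%d %H:%M}  {kind}  {p.name}")

A missing day between incrementals is a good hint that the chain is broken.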
That’s what I was afraid of. I think you saw the same stuff I did. I was hoping to avoid starting a new backup chain on this. The weird thing is, when I look at the restore points in Veeam itself, I don’t see any orphaned ones at all. It's very strange, and it just randomly started happening with no changes.
Is there any benefit to holding on to old restore points once a new chain has been created by the new full?
Is this a SOBR where the full backup files are written to a different extent than the incremental files? If so, are all extents accessible?
It depends on whether Veeam can read them or not; with missing files, the chain may show as corrupted and be unreadable anyway.
Usually missing ones will show up grayed out in the job’s properties, correct? I didn’t see any when I looked through it.
No SOBR on this one, just a standard, run-of-the-mill backup to a local repository.
For retention purposes, yes, there is a benefit to keeping the old restore points.
To me this sounds like either someone has deleted older chains and the VBM (metadata) is still referencing them, causing issues, or you're using forever forward incremental or reverse incremental and the chain is broken.
At this point consider the chain corrupted and a new active full is necessary.
Read this to see if you’ve got restore points missing that exist in the database: https://helpcenter.veeam.com/docs/backup/vsphere/remove_missing_point.html?zoom_highlight=View+restore+points&ver=110
If that’s the case you should be fine to forget the missing ones and move on without a new active full.
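If you'd rather not delete anything yet, you can also cross-check what the metadata still references against what is actually on disk. A rough Python sketch, assuming the .vbm is a text-based metadata file that mentions the backup file names (the repository path is a placeholder):

    import pathlib
    import re

    repo = pathlib.Path(r"D:\Backups\JobName")  # hypothetical repository path
    vbm = next(repo.glob("*.vbm"))              # the job's metadata file
    text = vbm.read_text(errors="ignore")
    # Pull every .vbk/.vib file name the metadata mentions and check it exists.
    referenced = set(re.findall(r"[\w .\-]+\.(?:vbk|vib)", text))
    for name in sorted(referenced):
        status = "ok" if (repo / name).exists() else "MISSING"
        print(f"{status:8s}{name}")

Anything flagged MISSING is a file the metadata expects but can't find, which would match the "deleted chain still referenced" theory.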
The active full ran into the same error. I checked the restore points and they're all there. There is no obvious reason for this to be happening.
If you dive into the backup job logs, can you confirm whether the storage is being picked up by the proxy server? I had a weird issue a while ago where a server was given the wrong DNS suffix on registration, and only some servers could talk to it, depending on whether they auto-appended a suffix or not.
I actually had an issue today where the mount server wasn't reachable because of this exact same DNS suffix problem.
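A quick way to test for that kind of suffix problem is to resolve both the short name and the FQDN from the affected server and compare the answers. A small Python sketch (the hostnames are placeholders for your proxy or mount server):

    import socket

    # Replace with your real short name and fully qualified name.
    for host in ("veeam-proxy01", "veeam-proxy01.corp.example.com"):
        try:
            addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(host, None)})
            print(f"{host}: {', '.join(addrs)}")
        except socket.gaierror as exc:
            print(f"{host}: lookup failed ({exc})")

If the short name and FQDN resolve differently, or one fails outright, you've found the suffix problem.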
Which logs would you check specifically?
Go to C:\ProgramData\Veeam\Backup, then check the actual job folder; the folder name is the same as the job name. The logs in there should point you to the cause.
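If there are a lot of log files in there, a small Python sketch like this can skim them for the relevant lines (the job name is a placeholder, and the strings to search for are just suggestions):

    import pathlib

    # Folder name matches the job name; "JobName" is a placeholder.
    logdir = pathlib.Path(r"C:\ProgramData\Veeam\Backup") / "JobName"
    needles = ("Full storage not found", "Error", "Warning")
    for log in sorted(logdir.glob("*.log")):
        for line in log.read_text(errors="ignore").splitlines():
            if any(n in line for n in needles):
                print(f"{log.name}: {line.strip()}")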
I managed to resolve the issue. I ultimately had to get rid of all of my restore points (I told Veeam to delete them from disk) and run a new full. I tried running a new full to start a new chain before doing this, but still ran into the same issue, so something must have gotten corrupted somewhere in the chain. I wanted to update everyone in the hope that this thread helps someone in the future.
Glad to hear you got things worked out. So in the end it wasn't the storage but the chains.
Hi,
I have the same error, but in one job that contains 8 backups from different machines.
None of the backups have orphaned restore points, and the repository is healthy.
When the job starts and tries to complete the backup of one machine, I get the error:
03/05/2023 13:33:13 :: Failed to pre-process the job Error: Full storage not found
03/05/2023 13:33:13 :: Failed to preprocess target Error: Full storage not found
The job retries this 6 times.
As I said, this job has 6 virtual machines and only one of them causes the error.
To me this sounds like a space issue if it's reported that many times. If there are orphaned restore points, try cleaning those up and see.
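If you want to double-check the space angle, the free space on the repository volume is easy to confirm. A tiny Python sketch (the path is a placeholder for your repository volume):

    import shutil

    # Path is a placeholder for the repository volume.
    total, used, free = shutil.disk_usage(r"D:\Backups")
    gib = 1024 ** 3
    print(f"total {total/gib:,.0f} GiB | used {used/gib:,.0f} GiB | free {free/gib:,.0f} GiB")

Keep in mind a new full needs room for a whole extra .vbk alongside the existing chain until retention cleans up, so the headroom required can be much more than the day-to-day incrementals.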
Hi Chris,
Thank you for your answer.
There are no orphaned restore points on any virtual machine.
For this machine, let's call it "lionX", the .vib files only go up to 29/04/2023, while the other virtual machines in the same job have backups dated today.
Space? I don't think so. All the machines need 896 GB and the target has 935 GB.
Thanks for your help!