Full Storage Not Found - How I resolved this error
I wanted to create this thread in the hope that someone experiencing this error in the future will find it and see a solution. A backup of mine suddenly started throwing the error below:
Failed to pre-process the job Error: One or more errors occurred. Unable to apply retention policy: failed to delete backup files Error: Full storage not found
Through some Google searching, all I could really find was advice to run an Active Full and start a new backup chain. I tried this, but the error persisted. At that point I did the only other thing I knew to do: I told Veeam to delete that job's restore points from disk and then ran a new Active Full, essentially starting over with a fresh dataset. The next backup completed successfully.
In conclusion: if you ever run into this issue, first try running a new Active Full. If that does not resolve it, go to your restore points (the Disk section in VBR) and tell Veeam to delete all of the failing job's restore points from disk. This should resolve the issue.
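Before pulling the trigger on "Delete from disk", it can be reassuring to see exactly which files are about to go. Here's a minimal Python sketch, assuming your repository is a plain folder you can browse and the job writes the usual .vbk/.vib/.vbm files; the path below is hypothetical:

```python
# Minimal sketch: list a job's backup files (newest first) before using
# "Delete from disk", so you can confirm exactly what will be removed.
# The repository path below is hypothetical; adjust for your setup.
from pathlib import Path

REPO = Path(r"D:\Backups\MyFailingJob")  # hypothetical per-job folder

files = sorted(
    (p for p in REPO.iterdir() if p.suffix in {".vbk", ".vib", ".vbm"}),
    key=lambda p: p.stat().st_mtime,
    reverse=True,
)
for p in files:
    kind = {"vbk": "full", "vib": "incremental", "vbm": "metadata"}[p.suffix[1:]]
    print(f"{p.name:60} {kind:11} {p.stat().st_size / 2**30:8.1f} GiB")
```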
Also, SureBackup is worth implementing, as it should catch corruption creeping into the chain.
Hey, before deleting backup data that may still be usable and risking ending up with nothing, I would recommend opening a ticket with Veeam support to identify the cause. To avoid being left without a backup, I would duplicate the problematic job and start a new backup chain, so the "corrupted" data can still be used if needed.
Also, to avoid corrupt data, you could use the Storage-level corruption guard mentioned above, or the integrity check in a SureBackup job.
Hey all,
While the OP's strategy will work, this typically happens because of issues within the database. The most common cause is that something interrupts the SQL server or the VBR server (or both) mid-operation during retention processing, and we end up with a state in the DB that shouldn't exist: incremental backups that don't have a full backup at the start of their chain.
There are a few other situations with such linking problems, but it’s not worth listing them.
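To make the linking problem concrete, here is a rough Python sketch that flags incrementals with no full earlier in the chain. It only approximates chain order from file timestamps on a repository folder you can browse; the authoritative chain metadata lives in the .vbm file and the configuration database, so treat it as a hint, not a verdict. The path is hypothetical:

```python
# Rough sketch of the linking problem described above: flag incremental
# (.vib) files that have no full (.vbk) earlier in the chain. Chain order
# is approximated by file modification time; the real chain metadata is
# in the .vbm file and the VBR configuration database.
from pathlib import Path

REPO = Path("/mnt/backups/job01")  # hypothetical repository folder

chain = sorted(
    (p for p in REPO.iterdir() if p.suffix in {".vbk", ".vib"}),
    key=lambda p: p.stat().st_mtime,
)
seen_full = False
for p in chain:
    if p.suffix == ".vbk":
        seen_full = True
    elif not seen_full:
        print(f"ORPHANED: {p.name} has no full backup before it")
```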
The good news is that this is usually a pretty fast fix with a support case. Open a case, export logs as per https://veeam.com/kb1832, use the first radio option, and select the affected job(s). You can Ctrl- and/or Shift-click to select multiple jobs.
You will also want to attach a copy of your Veeam configuration database, if your org's policies allow exporting that data: https://www.veeam.com/kb1471
Mention that you read on the forums it _might_ be related to issue 377509; your engineer should be able to find the appropriate steps. This is a general fix that allows retention to clear those invalid links in the DB, but it's not a panacea for every instance of this error, so be prepared that more research may be needed in some cases. Typically, though, this can be solved without losing backups.
But if you're okay with simply starting a new backup set, just follow the opening post: Remove from Configuration for the affected backup, then run the job (and prepare for an Active Full!).
Hello @bp4JC
Did you configure the backup job's maintenance settings?
Great to hear you solved this, and thanks for sharing it with the community.
I ran into the same issue and had the same result: the chain gets corrupted and causes that error. If you haven't done so, enable Storage-level corruption guard in the job's storage advanced settings. It will scan the storage and fix minor problems that could turn into major ones. It won't fix a chain that has already gone bad, though.
I think this issue can also be caused by your object lock settings. If you set your repo to lock objects for 30 days and your job retention is shorter than that, Veeam can't delete the files, resulting in this error. I think...
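If you want to check this theory on an S3-compatible repository, a quick boto3 sketch can list objects whose Object Lock has not yet expired, i.e. files Veeam cannot delete. The bucket name and prefix below are hypothetical; credentials come from your usual AWS/boto3 configuration:

```python
# Minimal sketch (assuming an S3-compatible repo with Object Lock): list
# objects whose lock still holds, i.e. files that cannot be deleted yet.
# Bucket name and prefix are hypothetical.
from datetime import datetime, timezone
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET, PREFIX = "veeam-repo", "Veeam/Backup/job01/"  # hypothetical

now = datetime.now(timezone.utc)
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        try:
            ret = s3.get_object_retention(Bucket=BUCKET, Key=obj["Key"])
        except ClientError:
            continue  # object has no retention configured
        until = ret["Retention"]["RetainUntilDate"]
        if until > now:
            print(f"LOCKED until {until:%Y-%m-%d}: {obj['Key']}")
```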
I rarely run Active or Synthetic fulls... I just have checksums and health checks configured to run each month, and thankfully I have never experienced issues on storage. Hopefully it stays that way. Glad you got it sorted.
Hello, I have the same error message on one backup job and two backup copy jobs, and I still have a support case open. V12, latest CP. Some observations:
- Only about 10% of the VMs in the jobs are affected
- Rescanning the SOBR did not help
- I can't run a new Active Full
- It started one week after upgrading to the new backup chain format
- Restore tests were OK before the error appeared
I will share the solution with the community once the problem is solved.
I ran into the same issue today. All my repos are hardened (Linux XFS), and I now have around 10 backup jobs where some machines (not all) are failing with the same error.
I rescanned all of the involved repositories: NO missing restore points and no red X marks on any of them, and an Active Full backup does not fix the problem either.
I have opened a P2 incident with Veeam support and am waiting for hints.
When you get a resolution from Support, I recommend sharing their solution here, @A.Venturi, if you don't mind... it will help others out who have the issue.
Sure :)
I get this message every few weeks. Have to delete my entire backup chain and start again.
Every time.
I’d suggest you take a look at this with support then or review with a Veeam Partner (assuming you’re an end customer), that’s not a normal error message and could indicate issues within your environment. You don’t want this error when you need to perform a recovery...
I’m using the community version and don’t appear to have any access to support… I’ve opened cases in the past but they’re just ignored and then closed.
It’s just one of the many, many errors I get seemingly at random.
My favourite is when a backup job stops working because the repository is somehow out of sync - that happens every few weeks too, and requires that I press a button to rescan the repository. Surely the software could just run that task automatically upon discovering a repository is out of sync?!
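In the meantime, you could script the rescan yourself. Here's a rough Python sketch against the VBR REST API (port 9419). The OAuth2 token endpoint matches Veeam's documentation, but the rescan endpoint path and the x-api-version value are assumptions; verify both against the REST API reference for your VBR version:

```python
# Hedged sketch of automating a repository rescan via the VBR REST API.
# The token flow matches the documented OAuth2 endpoint; the rescan
# endpoint path and x-api-version value are assumptions, so check the
# REST API reference for your VBR version before relying on this.
import requests

VBR = "https://vbr.example.local:9419"   # hypothetical server
HEADERS = {"x-api-version": "1.1-rev0"}  # assumption: match your version

tok = requests.post(
    f"{VBR}/api/oauth2/token",
    data={"grant_type": "password", "username": "admin", "password": "..."},
    headers=HEADERS,
    verify=False,  # many VBR servers use a self-signed certificate
).json()["access_token"]
HEADERS["Authorization"] = f"Bearer {tok}"

# Assumed endpoint: trigger a rescan of a repository by its id.
repo_id = "00000000-0000-0000-0000-000000000000"  # placeholder
r = requests.post(
    f"{VBR}/api/v1/backupInfrastructure/repositories/{repo_id}/rescan",
    headers=HEADERS,
    verify=False,
)
print(r.status_code)
```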
Yep, that's correct: you get best-effort access to support when Veeam Support has spare capacity.
I'd suggest making a new topic here detailing your environment and the problems you've got; then we can get into it properly, and it will also help get fresh eyes on your issues versus this old topic.
We ran into this issue today. Another way to resolve it is to find the oldest restore point and manually delete it.
Could you check for errors? Is there an orphaned .vib when you browse your jobs and sort them by date?
Unfortunately, if you are in the same boat as me, you will probably need to remove the orphaned .vib via the CLI, then rescan the SOBR and forget all unavailable backups...
I agree with the suggestion above: list your setup and configuration in a new topic so we can help further, as you should not be having random problems like this with Veeam.
Had this start popping up on one of my backup chains. The "Upgrade Backup Chain Format" option fixed it for me (backup console → Home → Backups → Disk → select the job that has the error; the option appears in the top banner if available). Worth checking whether this is available before blowing away backups and starting a fresh full backup.