Solved

What does "Warning: some restore points were not processed" mean?


Userlevel 7
Badge +3

What does “Warning: Some restore points were not processed” mean? I got this warning on a cloud copyjob and I can’t really find out what specifically it means or why it happened. Does this mean the copyjob failed? And in a disaster recovery scenario, would I have missing data that I couldn’t restore from the cloud because it didn’t copy over?

 

I ran a new sync and it was successful, but I am wondering what this warning means and how I can prevent it in the future.


Best answer by MicoolPaul 26 March 2021, 19:57


10 comments

Userlevel 7
Badge +20

Hey, this normally means that the time window you set for syncing expired before some VMs were copied. If this was an incremental backup copy, then it was likely just a transient bandwidth issue, or too large a delta of changed blocks between backups to shift off-site in time. Depending on your WAN speed, it may be worth looking at WAN acceleration if your provider supports it.
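For a quick sanity check on the bandwidth angle, a back-of-the-envelope calculation along these lines can tell you whether the changed blocks can realistically make it across the WAN before the next interval starts; every number below is a placeholder to swap for your own figures:

```python
# Back-of-the-envelope check: can the changed blocks be copied within the
# copy interval at the available WAN speed? All figures are hypothetical
# placeholders - substitute your own delta size, bandwidth and interval.

delta_gb = 150        # changed data since the last successful copy (GB)
wan_mbps = 100        # usable WAN bandwidth towards the cloud repository (Mbit/s)
interval_hours = 2    # how often new restore points arrive / copy interval (h)

transfer_hours = (delta_gb * 8 * 1000) / (wan_mbps * 3600)
print(f"Estimated transfer time: {transfer_hours:.1f} h (interval: {interval_hours} h)")

if transfer_hours > interval_hours:
    print("The delta likely cannot be shifted off-site before the next interval starts.")
else:
    print("Bandwidth alone probably isn't the bottleneck.")
```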

Userlevel 7
Badge +3

Hey, this normally means that the time window you set for syncing expired before some VMs were copied. If this was an incremental backup copy, then it was likely just a transient bandwidth issue, or too large a delta of changed blocks between backups to shift off-site in time. Depending on your WAN speed, it may be worth looking at WAN acceleration if your provider supports it.

I was thinking it may have been a time expiration issue, but I looked at the scheduling and it’s set to run continuously and to run whenever new restore points are available. After I posted this, I had failure notices come in with the same message, so it went from a warning to a failure. I have seen errors that said “some VMs failed to process during copy interval,” but this one just said “Some restore points were not processed.”

 

It’s very strange. I was able to get a successful “Sync Now”, but I am just wondering what happened and what I can do to prevent it.

 

Thanks so much for your answer!

Userlevel 7
Badge +20

Did all the VMs you back up have a new restore point created? If nothing new was created during the cycle (and you trigger a new cycle when you run Sync Now), that would also generate this message.

Userlevel 7
Badge +17

@bp4JC, you say the job is running continuously. Is it possible that the VMs with the warning do not have a new restore point since the last run of the copy job?

I have seen this with continuous tape copy jobs. The job gives warnings because two of the included source jobs only run on a weekly schedule.

Userlevel 7
Badge +3

Did all the VMs you back up have a new restore point created? If nothing new was created during the cycle (and you trigger a new cycle when you run Sync Now), that would also generate this message.

I believe so. They’re all part of one job that runs every couple of hours. The failure that occurred afterward makes sense now that you ask that: when I ran the Sync Now, I think backups were happening and I didn’t realize it, and that may have been why the failure appeared. But when the warning initially occurred, all VMs had already backed up successfully (all as part of a single backup job) and the copyjob was running to move those latest restore points over.

 

My understanding is that a warning does not indicate a failure, so if I am understanding correctly, even if I had had a disaster recovery scenario, my data would still have been offsite?

Userlevel 7
Badge +3

@bp4JC, you say the job is running continuously. Is it possible that the VMs with the warning do not have a new restore point since the last run of the copy job?

I have seen this with continuous tape copy jobs. The job gives warnings because two of the included source jobs only run on a weekly schedule.

Hey, JMeixner!

 

There should have been new restore points for all of the VMs. They are all part of the same backup job that runs about every two hours. The copyjob is set to kick off whenever new restore points appear.

 

As I type this, the thought occurs to me: is it possible that the copyjob started and finished while some VMs were still backing up, and it threw a warning because it couldn’t copy everything over with those VMs still in progress? With it set to continuous/whenever new restore points appear, is this even possible?

Userlevel 7
Badge +20

As I type this, the thought occurs to me: is it possible that the copyjob started and finished while some VMs were still backing up, and it threw a warning because it couldn’t copy everything over with those VMs still in progress? With it set to continuous/whenever new restore points appear, is this even possible?

Yep. If you run a report for your backup job and compare the times, that should give you an understanding of what’s happening. Also check the logs for the backup copy job; they should give more detail about what happened.

 

Any existing backup copies will still be accessible. If you go to the main “Home” screen and browse to Backups > Backup (Copy), it will show you all the backup copy jobs, the VMs within them, their health status, and how many restore points are available.
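If you note the start and end times from both reports, a quick overlap check like this (the timestamps are invented for illustration) shows whether the copy job was running while the backup job was still producing restore points:

```python
from datetime import datetime

# Hypothetical times pulled from the backup job and backup copy job reports.
fmt = "%Y-%m-%d %H:%M"
backup_start = datetime.strptime("2021-03-26 14:00", fmt)
backup_end   = datetime.strptime("2021-03-26 15:10", fmt)
copy_start   = datetime.strptime("2021-03-26 14:45", fmt)
copy_end     = datetime.strptime("2021-03-26 15:00", fmt)

# If the copy job's window overlaps the backup job's window, some VMs may
# still have been locked by the backup job when the copy job tried to read them.
overlap = copy_start < backup_end and backup_start < copy_end
print("Copy job overlapped the backup job:", overlap)
```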

Userlevel 7
Badge +3

Yep. If you run a report for your backup job and compare the times, that should give you an understanding of what’s happening. Also check the logs for the backup copy job; they should give more detail about what happened.

Any existing backup copies will still be accessible. If you go to the main “Home” screen and browse to Backups > Backup (Copy), it will show you all the backup copy jobs, the VMs within them, their health status, and how many restore points are available.

I took a look at the HTML report when I saw the warning and it didn’t really show me anything other than that the copyjob finished with a warning. The backup job itself finished 100% successfully.

 

What’s confusing is, if the copyjob is set to kick off any time there is a new restore point, why would it have stopped early, or not kicked off again when any straggling VMs finished backing up? Unfortunately, I am only able to see a report for the last copyjob that ran, which in this case was successful.

Userlevel 7
Badge +20

I took a look at the HTML report when I saw the warning and it didn’t really show me anything other than that the copyjob finished with a warning. The backup job itself finished 100% successfully.

What’s confusing is, if the copyjob is set to kick off any time there is a new restore point, why would it have stopped early, or not kicked off again when any straggling VMs finished backing up? Unfortunately, I am only able to see a report for the last copyjob that ran, which in this case was successful.

Hey @bp4JC, sorry, I meant run a report of the backup job, not the backup copy job :)

 

Can you grab the logs of the backup copy job as well to see what is happening? By default they’re in C:\ProgramData\Veeam\Backup\<name of backup copy job>
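If you’d rather not read the logs by hand, a small script along these lines can surface the warning and error lines from the most recent log files; the job folder name and the strings it filters on are assumptions to adjust for your environment:

```python
import glob
import os

# Assumed location based on the default path mentioned above; replace the
# job folder name with your own backup copy job's name.
log_dir = r"C:\ProgramData\Veeam\Backup\My Cloud Copy Job"

# Look at the most recently modified log files first.
log_files = sorted(glob.glob(os.path.join(log_dir, "*.log")),
                   key=os.path.getmtime, reverse=True)

for path in log_files[:5]:
    print(f"--- {path} ---")
    with open(path, errors="ignore") as fh:
        for line in fh:
            # Crude filter: surface anything that looks like a warning or error.
            if "warn" in line.lower() or "error" in line.lower():
                print(line.rstrip())
```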

Userlevel 7
Badge +22

Someone might have mentioned this already above, but just in case: if you are using the old (periodic) method for backup copy jobs and the copy interval is set too low, then every time a new interval starts the backup copy job will begin again at the start of the VM list. So any big VMs towards the end may never make it over. You can reorder the queue, i.e. move VMs in the job up towards the beginning.

 

Also, if the backup job runs every two hours, I believe the copy job, or at least the VMs that are currently being backed up, will be locked and can’t be processed until the backup job has released those VMs’ restore points.

If this is the new method (immediate), then I would think the problem is again that the source job is running too often. One way I solved that issue with a Cloud Connect customer was not to use a backup copy job for the offsite copy, but simply another backup job to the cloud. That way, when it started it would lock the VMs and allow itself to finish before the other job could kick in.

I hope I did not miss something or am not totally misunderstanding what is happening :)

 

As someone said, the proof will be in looking at what restore points you actually have. If you right-click on the name of the job under Backups > Disk/Cloud, it should show you a list and you can see if any are incomplete.
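To make the copy-interval point above concrete, here is a toy sketch (all sizes and throughput are invented) of why a large VM at the end of the list can keep getting skipped when every interval restarts from the top:

```python
# Toy illustration (all sizes and throughput invented) of the "old method"
# behaviour described above: each copy interval restarts at the top of the
# VM list, so a large VM near the end can miss every interval.

vms = [("vm-small-1", 20), ("vm-small-2", 30), ("vm-big", 400)]  # GB to copy
budget_gb = 120 * 2   # e.g. 120 GB/h of WAN throughput * a 2 h interval

copied, missed = [], []
for name, size in vms:            # processed strictly in list order
    if size <= budget_gb:
        budget_gb -= size
        copied.append(name)
    else:
        missed.append(name)       # interval expires before this VM finishes

print("Copied this interval:", copied)
print("Missed this interval:", missed)
# The next interval starts at the top of the list again, so without reordering
# the queue (or a longer interval / more bandwidth), vm-big keeps being skipped.
```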

 

cheers
