
My source backup job runs daily. On Mondays it produces a 1 TB restore point (1:00 AM to 4:00 AM, about 3 hours), and for the rest of the week it produces short 100 GB increments (1:00 AM to 1:30 AM).

My WAN link to the offsite location can copy 500 GB per day.
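For reference, a quick back-of-the-envelope check of how long each restore point takes to cross that link, using only the sizes and rate stated above:

```python
# Rough WAN transfer-time estimate for the restore points described above.
WAN_RATE_GB_PER_DAY = 500   # effective offsite copy rate
FULL_GB = 1000              # Monday restore point (~1 TB)
INCREMENT_GB = 100          # daily increments, Tue-Sun

print(f"Monday full:     {FULL_GB / WAN_RATE_GB_PER_DAY:.1f} days")       # 2.0 days
print(f"Daily increment: {INCREMENT_GB / WAN_RATE_GB_PER_DAY:.1f} days")  # 0.2 days (~5 h)
```

So the Monday full needs roughly two uninterrupted days on the wire, while each increment fits comfortably inside a single day.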

The backup copy job transfers about 500 GB on Monday but stops automatically when the source job runs again:

“Failed to process NAS backup copy task Error: Stopped by job SourceX”

After the source job finishes, the copy job automatically starts again. It copies another 500 GB and is interrupted when the source job runs once more.

 

My question: is the copy job restarting from scratch each time, or resuming from where it was interrupted?

Today is Saturday and it still hasn't finished copying the large Monday restore point.

  

It should be resuming from where it left off. Another option, if you haven't already, is to schedule the copy job to start when the main job finishes (in the job's scheduling section). That way, if the source job finishes early, the copy gets a longer window to run.


I was hoping the copy job would be resuming, but so far I've calculated that it has transferred more data than the actual size of the restore point being copied. Also, the copy backup's folder is three times larger than the source backup's folder. I'll wait until the end of today and give an update.

 



It is possible that it starts over due to being interrupted by the main job. So that is where scheduling gets tricky.


I'm wondering if I should switch to agent-based backup. That way, the copy job would send only changed-block increments (rather than all new and modified files), and it could just run synthetic fulls periodically to manage its chain.


You could try that and see what happens.  Worth a shot.


Hi @Arin -

Yes, the BCJ (backup copy job) does indeed resume, as Chris shared. See below what the Guide says about this:
 

BCJ Resume

The reason there may be more data after a resume is that new data may have been written to the source backup repository in the meantime. The Guide doesn't really specify whether BCJs resume exactly where they left off (i.e., from a point where there was no new data in the source repo). Support may be able to confirm whether that is the case.

As for what gets sent: BCJs copy data using the forever forward incremental method. The first run is a full, built either from the latest full file if the source uses reverse incremental, or from the latest full plus subsequent increments if the source uses forever forward or forward incremental. Subsequent BCJ runs send only the incremental changes from the source backup repository. If, however, you have GFS configured, the BCJ uses the forward incremental method instead.
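For anyone picturing how a forever forward incremental chain behaves, here is a minimal toy model (my own simplification, not Veeam's actual merge logic): the chain keeps one full plus increments, and once retention is exceeded the oldest increment is merged into the full rather than a new full being written.

```python
from collections import deque

RETENTION = 7                       # restore points to keep (illustrative value)

chain = deque(["full"])             # first BCJ run copies a full
for day in range(1, 11):
    chain.append(f"inc{day}")       # each later run copies only an increment
    if len(chain) > RETENTION:
        chain.popleft()             # retention exceeded: oldest point goes away...
        chain[0] = "full(merged)"   # ...by merging the oldest increment into the full
    print(day, list(chain))
```

The chain length stays constant and the full "rolls forward" each day, which is why no periodic full transfer is needed over the WAN.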

Hope this helps. Copy jobs can be a bit confusing...at least I think so 🙂


So, more than a week later, the restore point from last Monday is still being copied.

I came across the threads below from other users who have the same experience:

https://forums.veeam.com/file-shares-and-object-storage-f57/question-around-a-secondary-copy-t72934.html


https://forums.veeam.com/veeam-backup-replication-f2/backup-copy-resetting-during-each-night-s-source-run-t81413.html

The exception is the first copy, which I found is indeed resumable. All subsequent copies need to complete before another source restore point is created; otherwise the copy restarts from scratch.

Copy jobs for agent-based backups seem to be different from NAS file share backup copies. NAS file share copy jobs run continuously and automatically kick off as soon as new restore points are created, which is fine for me. However, it seems that once the copy job is interrupted because a new source restore point has been created, it starts over.
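To make the failure mode concrete, here is a minimal simulation of restart-from-scratch versus resume, assuming the ~500 GB/day rate from earlier and one interruption per day (the nightly source run):

```python
def days_to_copy(size_gb, rate_gb_per_day, resumes=True, max_days=14):
    """Simulate a copy that is interrupted once per day by the source job.

    If resumes is False, all progress is discarded at each interruption."""
    done = 0.0
    for day in range(1, max_days + 1):
        done += rate_gb_per_day          # one day's worth of transfer
        if done >= size_gb:
            return day
        if not resumes:
            done = 0.0                   # interrupted: start over from scratch
    return None                          # never finishes within max_days

print(days_to_copy(1000, 500, resumes=True))   # 2    -> finishes on day two
print(days_to_copy(1000, 500, resumes=False))  # None -> never completes
```

With restart-from-scratch behavior, a 1 TB point can never finish over a 500 GB/day link as long as the source keeps producing daily restore points, which matches what I'm seeing.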


That's very interesting to hear @Arin , that the job would start from scratch. Appreciate you sharing the additional info.


Also, it may be worth sharing feedback with the technical writers team at Veeam to get that information added to the Guide, Arin. At the link I shared above, if you scroll to the bottom, there is a "send feedback" link for suggesting additions to the Guide. I would try that, or at least submit feedback to Support.


Yes, it would be useful if Veeam included in the copy job log how much (in percent or GB) is left to copy. That would be a huge benefit.


I managed to sort it out by disabling the source job for a few days to let the copy go through uninterrupted. The 1 TB of data is only created once per week on the file share, so going forward I decided it would be best to create a separate source backup job that runs once a week against that specific large folder; this gives the copy a full week to complete.
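As a quick sanity check on that design, using the numbers from the top of the thread (and assuming the weekly copy gets the WAN link largely to itself between runs):

```python
WAN_RATE_GB_PER_DAY = 500   # offsite copy rate from the original post
FULL_GB = 1000              # weekly 1 TB restore point
WINDOW_DAYS = 7             # weekly source job -> week-long copy window

days_needed = FULL_GB / WAN_RATE_GB_PER_DAY
print(f"{days_needed:.1f} days needed, {WINDOW_DAYS} days available")  # 2.0 vs 7
```

Two days needed against a seven-day window, so the copy fits with plenty of margin even if a daily increment copy occasionally takes priority.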

I currently use NAS file share backup jobs, but it would have been interesting to know whether an agent-based backup copy job would have been able to resume (since those are copied at block level). I'm just short on time to test that! 🙂 Thanks, all.



Great to hear, and that's something that should have been suggested earlier. Glad you caught it, and nice to see things in sync now. 👍🏼


Nice to hear you found something that works for you, Arin.

