So I have an issue. We back up to disk (a Data Domain, in our case), and from there, copy to tape. So I have 2 jobs - a backup job that writes to the DD, and a tape job (set as secondary target). The backup job is set to make an active full once a week (usually on a Friday), and the other days are incrementals. Pretty standard so far.

My issue: on the weekend, we ran out of blank tapes in the library. I had to cancel the running tape job (which was trying to write out the Friday FULL backup). I got some new blanks, and loaded them. But the Monday tape job went to write out the (missed) Friday FULL backup, instead of just doing the Monday incremental. 

The FULL backup is about 4 TB; there’s no way it will finish before the next night’s incremental starts. I had to change the tape job’s scheduled start to “Not scheduled” just so it would have enough time to finish (otherwise it would have been interrupted by that evening’s incremental job). I don’t want to hold the source backup job: it should still write out that night’s incremental to disk and not miss its scheduled start time. I just want the tape job to not go back and retry the missed tape writing.

So how can I set the tape job to not do this - i.e., not go back and attempt to write out a previously missed backup? I only want it to write out whatever that day’s scheduled backup is (the incremental). I don’t see any way to do that.

I hope I explained that clearly enough. Thanks for any insight.

UPDATE:

I heard back from Support. Apparently, if your backup job has only 1 VM (as mine does), then Veeam will not utilize parallel processing (i.e., write to multiple tape drives at once). So I might change my job to back up a 2nd, unrelated VM, just so that when the job writes to tape, it will utilize multiple tape drives at once.

The reason for this is that you are currently backing up a single backup job with a single VM in it (at least from what I can tell). In VBR, one VM will not be written to multiple drives asynchronously. If you had multiple jobs or multiple tasks/VMs within a single job, then you would be able to utilize parallel processing the way you are trying to use it, but sadly, that doesn’t seem to be your current setup.


This is the reason why I asked if you have per-VM backup files activated 😎

Ok, when you have only one VM in this job, then only one tape drive will be utilized.

And including a second, unrelated VM in this job will not help, because the amount of data for your current VM will not decrease, and that data will still be handled by one tape drive even if you add several VMs.

The only solution I see for this is to use a newer and more powerful tape drive. For example, an LTO-6 drive has a native transfer speed of 160 MB/sec, while an LTO-9 drive has a transfer speed of 400 MB/sec.
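
To put those numbers in context, here is a quick back-of-envelope sketch (the 4 TB is the full backup size from the original post; the MB/sec values are the published native rates, so real-world throughput will usually be lower):

# Back-of-envelope only: minimum time to stream a 4 TB full backup to a single
# drive at each generation's published native (uncompressed) rate. Real-world
# throughput is usually lower (source read speed, network, compression ratio).

FULL_BACKUP_TB = 4                     # size quoted in the original post

NATIVE_MB_PER_SEC = {"LTO-6": 160, "LTO-7": 300, "LTO-9": 400}

for gen, rate in NATIVE_MB_PER_SEC.items():
    seconds = FULL_BACKUP_TB * 1_000_000 / rate   # 1 TB ~= 1,000,000 MB (decimal)
    print(f"{gen}: {seconds / 3600:.1f} h at {rate} MB/sec")

# Prints roughly: LTO-6: 6.9 h, LTO-7: 3.7 h, LTO-9: 2.8 h

Even at the nominal LTO-9 rate, a single drive needs close to 3 hours for the full, and that is the best case.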


These are LTO-7 drives, although I had been using LTO-6 tapes (I finally have an order for LTO-7 tapes coming, now that we’ve used up our stock of LTO-6 tapes). And support does say “If you had multiple jobs or multiple tasks/VMs within a single job, then you would be able to utilize parallel processing the way you are trying to use it”, which seems to disagree with what you say. I take that statement to indicate that the important thing is not the amount of data, but the number of VMs in the job. Now, maybe it means that each VM will go to a different tape drive, I dunno, we’ll find out. Still waiting for the shipment of new tapes to come in ….


In my environments, each VM can be handled by a different drive, but the backup files from one VM cannot be split across drives.

It is possible that there is a change in behavior with V12.1; I am on V12.0 with my tape environments at the moment….

And ok, LTO-7 has a transfer speed of 300 MB/sec, but LTO-9 is still about 33% faster 😎

 

BTW, I would interpret the statement from support the same way as what I said above.



That is what the statement you got means: the more VMs in a job, the more tape drives it will use, regardless of data size.


As I say, we’ll see. What I want is for the job (4 TB total) to be written to multiple drives at once - not 1 VM to 1 drive and the other VM to another drive, where almost all 4 TB ends up being written by 1 drive if 1 VM is much larger than the other.

I have to use what I have ….
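
To make the concern above concrete, here is a rough sketch (the VM sizes are made up for illustration, and 300 MB/sec is just the nominal LTO-7 native rate): if whole VMs are assigned to drives, the elapsed time is set by the largest VM rather than by the total split evenly across drives.

# Illustration only, with made-up VM sizes: if parallelism works per VM
# (one whole VM per drive), the tape job finishes when the largest VM does.
# Splitting the full 4 TB evenly across drives would finish much sooner.

vm_sizes_tb = {"big-vm": 3.5, "small-vm": 0.5}   # hypothetical split of the 4 TB job
drive_rate_mb_s = 300                            # nominal LTO-7 native rate

def hours(tb):
    return tb * 1_000_000 / drive_rate_mb_s / 3600

one_vm_per_drive = max(hours(size) for size in vm_sizes_tb.values())
even_split_2_drives = hours(sum(vm_sizes_tb.values())) / 2

print(f"one VM per drive     : {one_vm_per_drive:.1f} h")     # ~3.2 h (bounded by the 3.5 TB VM)
print(f"even split, 2 drives : {even_split_2_drives:.1f} h")  # ~1.9 h

In other words, adding a small second VM buys parallelism on paper but saves little wall-clock time if one VM holds most of the 4 TB.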


Please try it and tell us the results. 😎👍🏼

