So I have an issue. We back up to disk (a Data Domain, in our case), and from there, copy to tape. So I have 2 jobs - a backup job that writes to the DD, and a tape job (set as secondary target). The backup job is set to make an active full once a week (usually on a Friday), and the other days are incrementals. Pretty standard so far.
My issue: on the weekend, we ran out of blank tapes in the library. I had to cancel the running tape job (which was trying to write out the Friday FULL backup). I got some new blanks, and loaded them. But the Monday tape job went to write out the (missed) Friday FULL backup, instead of just doing the Monday incremental.
The FULL job is like 4 TB - no way it will finish before the next night’s incremental backup starts up. I had to change the scheduled start to “Not scheduled”, just so it would have enough time to finish (else it would have been interrupted by that evening’s incremental job). I don’t want to hold the source backup job - I want it to write out the incremental to disk for that night, as it should; I don’t want it to miss its scheduled start time. I want the tape backup to not go back and re-try the missed tape writing.
So how can I set the tape job to not do this - i.e., not go back and attempt to write out a previous missed backup? I only want it to do whatever that day’s scheduled backup job is (incremental). I don’t see any way to do that.
I hope I explained that clearly enough. Thanks for any insight.
How do you have the tape job configured? GFS? From what I know Veeam does this to ensure that a full backup chain makes it to the tape, and you don’t have a broken one. So, this is why Veeam is processing the full backup to get that to tape and will then proceed to the incrementals if you have those checked in the tape policy.
No GFS (actually, I’ve never used a “real” GFS scheme in 30 years of doing backups … always done just saving EOM and EOY tape backups, all others recalled and overwritten after 60 days).
Actually, due to legal constraints, at the moment we do not overwrite *any* removable media. So no old backups are ever recalled from offsite and overwritten and re-used. Just sent offsite ...
So there’s no way to accomplish this? With our old program, what got written to tape was whatever backup was performed in the specific tape job time period (i.e., within the last 24 hours). If I wanted an earlier backup (that might have been missed) sent to tape, I had to set a specific job that wrote that specific saveset (media set).
This is just how the Veeam to Tape jobs work. There is nothing you can control about what gets written out to tape on a given day/schedule.
So what happens if it misses a FULL backup, as happened here? It will just try the next scheduled execution time, even though it will never finish the job before the next backup job starts? I have backup jobs that can get kinda large (like 7-10 TB), and no way would they finish before the next scheduled weekday execution time, if they don’t finish on the weekend. How do you get around that?
There is no way around that, unfortunately, as that is how the tape service runs. One way around it would be to have another backup location, as going to the DD as primary is not a best practice, and reading from that to tape is what can take a much longer time. Block storage is the better option here, but it depends on what you have available. Part of the bottleneck is the DD, unfortunately, and the way Veeam synthesizes to tape.
The DD is what I have for storage. Been using one for like 12 years here, first with EMC Networker and now with Veeam (as we are transitioning). Even with other storage, writing 10 TB to tape in 1 day is probably not really feasible, unless maybe you had multiple LTO-8 drives and 10G fiber Ethernet ..
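For what it’s worth, a quick back-of-the-envelope check. The drive speed below is an assumption (roughly LTO-8 native throughput), not a number from this thread, and real-world throughput reading from dedupe storage is usually lower:

```python
# Back-of-the-envelope: how long a big full takes to stream to tape.
# Assumed: ~300 MB/s per drive (roughly LTO-8 native speed); actual
# throughput when rehydrating from a dedupe appliance is often lower.

TB = 1000**4  # tape vendors quote decimal terabytes

def hours_to_write(total_bytes, drives, mb_per_sec_per_drive=300):
    """Wall-clock hours, assuming the source can keep every drive streaming."""
    bytes_per_sec = drives * mb_per_sec_per_drive * 1000**2
    return total_bytes / bytes_per_sec / 3600

for drives in (1, 2, 3):
    print(f"{drives} drive(s) for 10 TB: {hours_to_write(10 * TB, drives):.1f} h")
```

So 10 TB on a single drive is on the order of 9+ hours even in the best case, which is why the source read speed and drive parallelism both matter here.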
Veeam has to make sure that the complete backup chain is written to tape. You cannot restore from the incrementals when the full from last Friday is missing.
So, you have to let the job finish. And yes, this can cause a missed incremental or an incremental at another time of the day.
Do you have only one tape drive? If you have more, you can try to configure the tape job to use multiple drives in parallel to get the data to the tapes faster.
I have 3 tape drives. And one media pool, which is set for parallel processing to use up to 3 drives. The default for the tape jobs looks to be set to 2 (it’s unchecked, but the value shows 2). I could set it to 3.
Even so, not being able to skip sending a specific full to tape complicates things. I may open a ticket and ask support if there’s a way. Otherwise, if I miss one weekend, it will NEVER have enough time to catch up. And that can’t be right, not for an enterprise-level program …
Thanks
When it’s unchecked, the setting is not active...
Tick the box and set the job to use all 3 drives. Hopefully that helps things complete faster for you.
Thanks, I’ll try that. Looks like the order of new tapes won’t be delivered in time for this weekend, so I will probably have the same issue again ….
I am also considering enabling jumbo Ethernet frames for the Veeam server and proxies (it’s already enabled for the DD). I dunno if it will help, but I suppose it can’t hurt …
I did check off the box to use all 3 drives on 1 job, that does have a 4TB FULL backup and a couple incrementals waiting. I then started the job. Only 2 tape drives are loaded, and only 1 is writing (the 3rd drive is unloaded and idle). So that option isn’t helping me, at least not yet ...
Did you upgrade your backup chains and enable the per-VM backup files?
If there are two files to write to tape only, then two drives can be used in parallel.
Give it some time to see if the third kicks in. It tends to be slow when starting.
I don’t understand what you mean “upgrade your backup chains and enable the per-VM backup files”?
If you go to the Disk section in the console and then to your backup job, you can click on the job and Upgrade Backup Chain will be available. That is what Joe means.
A new backup chain format was introduced with VBR V12. Up to V11, all VMs in a job were put into one big backup file; V12 and up can put each VM in the job into its own backup file.
With this you will have smaller backup files for each VM, which can be moved to tape individually, and more tape drives can be utilized in parallel.
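A minimal sketch of that point, assuming (as described above) that a tape job can only split work at backup-file granularity:

```python
# Why per-VM backup files matter for parallel tape drives: the job can only
# hand out whole backup files, so the number of drives that actually write
# is capped by the number of files waiting to go to tape.

def usable_drives(allowed_drives, backup_files):
    return min(allowed_drives, backup_files)

# Pre-V12 chain format: one big per-job file, so only 1 of 3 drives writes.
print(usable_drives(3, 1))

# Per-VM files, e.g. a job with 8 VMs: all 3 allowed drives can stream.
print(usable_drives(3, 8))
```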
This will help with processing to tape as well - FYI.
Don’t see that at all. Not under the job, nor if I right-click and choose the properties of the job.
I am running VBR 12.0.0.1420 P20230718. This was upgraded from an earlier v11.
Been over an hour, and still only 1 drive writing. Slow starting is one thing, but … LOL
Right-click the job name, not the VM inside the job. The new backup chain pertains to the entire job.
Nope.
Ok then you are already using the new backup chain format. No need to worry about it.
I have opened a ticket with support, to ask why I can’t utilize multiple drives for a tape job. We’ll see what they can figure out.
Probably the best route forward at this point and maybe they have some tweaks that can be done too. Best of luck and let us know how it goes.