Modes, GFS Retention, Seeding, and SOBR Targets
Backup copy jobs are the mechanism that delivers the second copy in your 3-2-1 strategy. They are also one of the most frequently misconfigured features in Veeam. The mode choice between Immediate and Periodic has consequences that are not obvious until a GFS full does not get created when expected. The retention interaction between primary jobs and copy jobs surprises people who assume the copy just mirrors the primary's retention. Seeded copy jobs for WAN environments require a specific setup sequence that most people skip, and then wonder why the initial copy runs for three weeks over the WAN when it could have seeded from a local drive.
This post covers backup copy jobs completely: how the two modes work and when to use each, GFS retention on copy jobs with the specific caveats that catch people, seeded copies, source selection, SOBR as a copy target, PowerShell automation, and what failure patterns look like and how to diagnose them.
1. Immediate Copy vs Periodic Copy Mode
This is the most impactful decision in copy job configuration and the one that is most often made without fully understanding the difference.
Immediate Copy Mode
In Immediate Copy mode, the backup copy job wakes up whenever a new restore point is created by a source backup job and immediately copies it to the target repository. There is no schedule. Every new primary backup triggers a copy run. The copy target stays as current as the primary, typically lagging only by the time the copy transfer itself takes.
The RPO advantage is real: if the primary repository fails two hours after a backup job completes and before the next backup runs, Immediate Copy mode means the copy repository already has that restore point. Periodic mode might not have transferred it yet.
The trade-off: GFS fulls are not created if the copy job did not run on the day the GFS full was scheduled. This is documented explicitly in the official Veeam Help Center. If your Immediate Copy job is source-driven and the source backup job did not run on Friday (the day configured for weekly GFS fulls), no weekly GFS full is created for that week. There is no catch-up mechanism in Immediate Copy mode. The GFS slot is simply missed.
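The no-catch-up behavior is easy to see in a small simulation. This is an illustrative Python sketch of the scheduling logic as described above, not Veeam code; the Friday GFS schedule and the day names are assumptions:

```python
# Illustrative sketch: in Immediate Copy mode, a weekly GFS full is only
# created if a copy run actually happens on the scheduled day. A missed
# day is never created retroactively.

GFS_DAY = "Fri"  # assumed weekly GFS creation day

def immediate_mode_gfs(source_run_days):
    """Return the weekly GFS fulls created, given the days the source
    backup job (and therefore the source-driven copy job) actually ran."""
    created = []
    for week, days in enumerate(source_run_days, start=1):
        if GFS_DAY in days:   # a copy run happened on the GFS day
            created.append(f"week{week}")
        # else: the GFS slot for this week is simply missed
    return created

runs = [
    ["Mon", "Tue", "Wed", "Thu", "Fri"],  # week 1
    ["Mon", "Tue", "Wed", "Thu"],         # week 2: Friday run missed
    ["Mon", "Tue", "Wed", "Thu", "Fri"],  # week 3
]
print(immediate_mode_gfs(runs))  # ['week1', 'week3'] -- week 2 is gone
```

Week 2's GFS full never appears, even though the job ran successfully on every other day of that week.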
Periodic Copy Mode
In Periodic Copy mode, the copy job runs on a defined schedule (every N hours, daily at a specific time) and copies the latest available restore point that exists at the time the job runs. It does not copy every new restore point, just the latest one at each scheduled run.
The consequence is that you can lose restore point granularity. If your primary job runs every hour and your copy job runs daily, the copy repository has one restore point per day while the primary has 24. You are not mirroring every primary restore point; you are sampling the latest one at each copy interval.
The GFS behavior is more reliable in Periodic mode: if the GFS full was scheduled for Friday and the copy job runs daily, VBR will create the GFS synthetic full on Friday from the latest available backup chain data, even if the primary backup job ran on Thursday night. GFS fulls in Periodic mode are created on schedule regardless of when the source data arrived. If a GFS synthetic full was not created on its scheduled day, VBR creates it after the next successful run.
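The sampling behavior can be sketched in a few lines. This is illustrative Python, not Veeam logic; the hourly primary and daily copy schedule are the assumptions from the paragraph above:

```python
# Illustrative sketch: Periodic Copy samples the latest restore point
# available at each scheduled run, so an hourly primary with a daily
# copy yields one copied point per day, not 24.

def periodic_copy(primary_points, copy_times):
    """primary_points and copy_times are hour offsets from a common
    origin. Each copy run takes the newest primary point that exists."""
    copied = []
    for t in copy_times:
        eligible = [p for p in primary_points if p <= t]
        if eligible and (latest := max(eligible)) not in copied:
            copied.append(latest)
    return copied

primary = list(range(0, 72))           # hourly primary points over 3 days
copies = [23, 47, 71]                  # daily copy runs
print(periodic_copy(primary, copies))  # [23, 47, 71]: 3 of 72 points copied
```

The 69 intermediate restore points exist only on the primary; the copy target holds the daily samples.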
Which Mode to Use
| Use Case | Recommended Mode | Reason |
| --- | --- | --- |
| Offsite copy to DR site over reliable link | Immediate Copy | Best RPO. Every restore point is copied as soon as it exists. |
| Copy to tape or slow WAN link | Periodic Copy | Controls when bandwidth is consumed. Does not compete with primary backup windows. |
| Copy job with GFS retention as primary compliance mechanism | Periodic Copy | GFS fulls are created reliably on schedule regardless of source backup timing. |
| Cloud target (S3, Azure Blob) with egress cost | Periodic Copy | Controlling when transfers happen controls egress cost. Immediate Copy can trigger unexpected egress charges. |
| High-frequency primary backups (hourly or shorter) | Periodic Copy | Immediate Copy triggers a copy run on every primary run; the copy target may not keep up with 24 copy runs per day. |
2. GFS Retention on Copy Jobs
GFS retention on a backup copy job works by flagging full backup files with weekly (W), monthly (M), or yearly (Y) flags. Once a GFS flag is assigned to a full backup file, that file can no longer be deleted or modified. Short-term retention cannot touch it. The retention policy applies on top of short-term retention: VBR keeps the GFS-flagged fulls until their GFS period expires, and manages regular backup chain files with the short-term retention point count independently.
GFS Methods: Synthetic Full vs Active Full
When creating GFS archive fulls, VBR uses synthetic full creation by default. It reads data from the existing backup chain on the copy target and synthesizes a full backup without re-reading from the source. This is efficient but generates random I/O on the copy target, which is a problem for deduplication appliances (ExaGrid, Data Domain, StoreOnce) that are optimized for sequential writes.
For copy jobs targeting deduplication appliances, switch the GFS method to Active Full. Active Full reads directly from the source backup repository and transfers a full copy to the target. The writes to the dedup appliance are sequential, which the appliance handles efficiently. The trade-off is higher source I/O and more WAN bandwidth consumed during GFS full creation.
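The WAN cost of Active Full is worth quantifying before switching. A back-of-envelope sketch in Python; the 5 TB full size, 500 Mbps link, and 80% link efficiency are illustrative assumptions, not figures from this article:

```python
# Rough cost of Active Full GFS creation: each GFS full re-transfers the
# entire full backup size over the WAN instead of synthesizing locally.

def transfer_hours(size_tb, link_mbps, efficiency=0.8):
    """Hours to push size_tb terabytes over a link_mbps link, assuming
    the link sustains `efficiency` of its nominal rate."""
    bits = size_tb * 8 * 10**12
    return bits / (link_mbps * 10**6 * efficiency) / 3600

# Assumed example: a 5 TB full over a 500 Mbps link at 80% efficiency.
print(round(transfer_hours(5, 500), 1))  # 27.8 hours per active full
```

If that window collides with the primary backup schedule or a weekly change-data transfer, the Active Full method needs its own scheduling consideration on top of the GFS day.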
The Short-Term Retention Interaction
When GFS is enabled, short-term retention counts restore points only in the active backup chain, not across the entire combination of all backup chains on the copy target. GFS-flagged full backups start new backup chains. VBR stops merging incrementals into them because they cannot be modified. The active chain is the one between the most recent GFS full and the present. Short-term retention manages that active chain. Everything behind a GFS flag is managed by the GFS retention period alone.
The practical result: if you set 14 restore points of short-term retention and enable weekly GFS, your copy target holds the last 14 restore points in the current chain plus at least one GFS-flagged full per week going back however far your weekly retention period extends. Size copy target repositories to account for GFS-flagged fulls on top of your short-term retention target, not instead of it.
If you enable only yearly GFS retention with no weekly or monthly cycle, you can end up with one full backup file and a very long chain of incrementals spanning the entire year. Long incremental chains increase restore time and one corrupted incremental can break the entire chain. Configure weekly and monthly GFS cycles as well, or set a periodic full backup schedule on the copy job to break the chain at regular intervals.
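The sizing consequence of "on top of, not instead of" can be put into numbers. A rough Python sketch; the full size, daily change rate, and retained GFS counts are illustrative assumptions, and the math ignores compression and deduplication:

```python
# Rough copy-target sizing: active short-term chain plus GFS-flagged
# fulls, which sit on top of short-term retention, not instead of it.
# Ignores compression/dedup; all inputs are illustrative assumptions.

def copy_target_tb(full_tb, daily_change_ratio, short_term_points,
                   weekly_fulls, monthly_fulls, yearly_fulls):
    # Active chain: one full plus (N-1) incrementals at the daily change rate.
    active_chain = full_tb + (short_term_points - 1) * full_tb * daily_change_ratio
    # Every retained GFS point is a separate, immutable full backup file.
    gfs_fulls = (weekly_fulls + monthly_fulls + yearly_fulls) * full_tb
    return active_chain + gfs_fulls

# Assumed: 10 TB full, 5% daily change, 14 short-term points,
# 4 weekly + 12 monthly + 7 yearly GFS fulls retained.
print(round(copy_target_tb(10, 0.05, 14, 4, 12, 7), 1))  # 246.5 TB
```

The GFS fulls dominate: 230 of the 246.5 TB in this assumed example is GFS retention, which is why sizing from the short-term window alone goes wrong.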
3. Copy Job Source: Backup Job vs Backup Repository
A backup copy job can draw from two different source types: a specific backup job, or a backup repository. The choice affects which restore points are available, how GFS flags are applied, and what happens when source jobs change.
- Source: Backup Job. The copy job monitors specific backup jobs you name. When any of those jobs create a new restore point, the copy job picks it up. Adding a new VM to the source backup job automatically includes it in the copy job scope. Removing a VM from the source job stops new restore points for that VM from being copied, but does not remove existing restore points from the copy target until retention expires them.
- Source: Backup Repository. The copy job monitors all backups stored in a specified repository and copies everything there. Useful for MSP scenarios where you want to copy everything in a repository regardless of which jobs produced it. The scope changes automatically as backups are added to or removed from the repository.
For most environments, sourcing from specific backup jobs gives you tighter control. Sourcing from a repository is better when you want to apply a uniform offsite copy policy across all jobs on a site without managing copy configurations per job.
4. Seeded Copy Jobs for WAN Environments
The first run of a backup copy job to a remote site has to transfer the full backup data. For a 10 TB environment over a 100 Mbps WAN link, that is roughly nine days of continuous transfer. A seeded copy job eliminates this initial blast by pre-loading the target repository with a copy of the backup data, then pointing the copy job at that seed as its starting point. The copy job only needs to transfer changes from there forward.
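The nine-day figure above is straightforward arithmetic, sketched here in Python so you can plug in your own sizes; it ignores compression, WAN acceleration, and protocol overhead:

```python
# Time to push a full environment over the WAN without a seed.
# Pure arithmetic; ignores compression and transfer overhead.

def wan_days(size_tb, link_mbps):
    seconds = (size_tb * 8 * 10**12) / (link_mbps * 10**6)
    return seconds / 86400

print(round(wan_days(10, 100), 1))  # 9.3 days for 10 TB at 100 Mbps
```

Compare that against the round-trip time of shipping a drive, and seeding wins for almost any multi-terabyte environment on a sub-gigabit link.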
The Seeding Workflow
- Run the source backup jobs normally until you have a full backup chain you want to seed from.
- Copy the backup files to portable media (external drives, NAS, shipping drives) and physically transport them to the remote site.
- At the remote site, place the backup files in a directory on the target repository server.
- In VBR, rescan the target backup repository so VBR indexes the seeded backup files.
- Create the backup copy job. On the Target step of the wizard, click the Map backup link and select the seeded backup files as the starting point. VBR maps the copy job to the existing data and begins copying only incremental changes.
The seeded backup files must match exactly what is in the source repository at the time you set up the copy job. If source backup jobs have run additional incremental backups since you created the seed, that is fine; VBR will copy those incrementals over the WAN. But the VBK (full backup) and VIB (incremental) files must not be modified or reprocessed after transport. Do not run any retention or transformation operations on the seeded files before the copy job is mapped to them.
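One way to guard the "must not be modified after transport" requirement is to hash the seed files before shipping and verify at the remote site before mapping the copy job. This is not a Veeam feature, just an illustrative Python sketch; the manifest approach and file paths are assumptions:

```python
# Hash every VBK/VIB in the seed before shipping, then compare at the
# remote site. Any mismatch means the seed was modified in transit and
# should not be mapped to the copy job.

import hashlib
from pathlib import Path

def manifest(directory):
    """Map each .vbk/.vib file name in `directory` to its SHA-256 digest."""
    result = {}
    for pattern in ("*.vbk", "*.vib"):
        for f in sorted(Path(directory).glob(pattern)):
            h = hashlib.sha256()
            with open(f, "rb") as fh:
                for chunk in iter(lambda: fh.read(1 << 20), b""):
                    h.update(chunk)
            result[f.name] = h.hexdigest()
    return result

def verify(source_manifest, seed_dir):
    """Return the names of seed files that are missing or differ."""
    seed = manifest(seed_dir)
    return sorted(n for n, d in source_manifest.items() if seed.get(n) != d)
```

Run `manifest()` on the source repository directory before transport, carry the manifest with the drive, and refuse to map the copy job if `verify()` returns anything.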
5. Copy Jobs Targeting SOBR
A backup copy job can target a Scale Out Backup Repository as its destination. The copy job data lands in the SOBR performance tier and participates in the same capacity tier offload and archive tier processes as primary backup data. Three constraints to know when using a SOBR as a copy job target:
- GFS-flagged fulls and Move mode. GFS-flagged full backups in copy jobs are treated as sealed chains once the active short-term chain moves forward past them. When they age past the SOBR Move threshold, VBR offloads them to the capacity tier the same as any other inactive sealed chain. What stays on the performance tier is the active short-term chain, because it is never sealed and cannot be moved. Size the performance tier for the short-term chain plus GFS fulls that have not yet reached the Move age threshold, not just the short-term operational restore window.
- Chain integrity on the performance tier. If the SOBR Data Locality policy moves part of a backup chain to a sealed or evacuated extent and that extent goes offline, the copy job chain breaks on the next run. Monitor extent health on SOBRs that are copy job targets.
- Forever forward incremental and Move mode. Forever forward incremental chains are always active because the single full backup is never sealed. Move mode requires an inactive sealed chain to operate, so Move mode has nothing to act on with a forever forward incremental chain. This is true for both primary jobs and copy jobs. Use Copy mode if your copy job produces a forever forward incremental chain and you need capacity tier offload.
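The performance tier sizing implication of the first constraint can be approximated with simple arithmetic. An illustrative Python sketch, assuming a steady weekly GFS cadence; real offload timing also depends on when each chain is actually sealed:

```python
# Approximate how many sealed GFS fulls are still on the SOBR
# performance tier under Move mode: fulls younger than the Move
# threshold have not been offloaded yet.

def gfs_fulls_on_performance_tier(move_threshold_days, gfs_interval_days=7):
    """Sealed GFS fulls older than the threshold are offloaded, so
    roughly threshold/interval of them remain on fast storage."""
    return move_threshold_days // gfs_interval_days

# Assumed: 30-day Move threshold with weekly GFS.
print(gfs_fulls_on_performance_tier(30))  # 4 fulls still on fast storage
```

Multiply that count by your full backup size and add the active short-term chain to get the performance tier floor, rather than sizing for the operational restore window alone.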
6. PowerShell Automation
Three scripts below: creating a copy job with GFS retention, reporting on copy job lag and GFS flag status, and monitoring copy job health.
POWERSHELL: CREATE A BACKUP COPY JOB WITH GFS RETENTION (PERIODIC MODE)
Connect-VBRServer -Server "vbr-server.domain.local"

# One Get-VBRJob call per line inside @() so each array element is a job object
$sourceJobs = @(
    Get-VBRJob -Name "Backup - Production VMs"
    Get-VBRJob -Name "Backup - Database Servers"
)

$targetRepo = Get-VBRBackupRepository -Name "DR-Site-Repo"

$gfsPolicy = New-VBRBackupGFSRetentionPolicy `
    -IsWeeklyEnabled $true -WeeklyRetentionPeriod Weeks4 `
    -IsMonthlyEnabled $true -MonthlyRetentionPeriod Months12 `
    -IsYearlyEnabled $true -YearlyRetentionPeriod Years7

$copyJob = Add-VBRBackupCopyJob `
    -Name "Copy - Production to DR" `
    -SourceJob $sourceJobs `
    -BackupRepository $targetRepo `
    -RestorePointsToKeep 14 `
    -GFSRetentionPolicy $gfsPolicy `
    -Description "Daily copy to DR site with 7-year GFS retention"

Write-Host "Copy job created: $($copyJob.Name)"

Disconnect-VBRServer
POWERSHELL: REPORT ON COPY JOB LAG AND GFS FLAG STATUS
Connect-VBRServer -Server "vbr-server.domain.local"

$copyJobs = Get-VBRJob | Where-Object { $_.JobType -eq 'BackupCopy' }

foreach ($job in $copyJobs) {
    Write-Host "`n=== $($job.Name) ==="
    Write-Host "  Mode: $($job.CopyMode)"
    Write-Host "  Last Result: $($job.GetLastResult())"
    Write-Host "  Last Run: $($job.LatestRunLocal)"

    $backup = Get-VBRBackup | Where-Object { $_.JobId -eq $job.Id }
    if ($backup) {
        $points = Get-VBRRestorePoint -Backup $backup | Sort-Object CreationTime -Descending
        Write-Host "  Restore Points: $($points.Count)"

        $gfsPoints = $points | Where-Object { $_.GetGFSFlags() -ne 'None' }
        if ($gfsPoints) {
            Write-Host "  GFS Points:"
            $gfsPoints | ForEach-Object {
                Write-Host "    $($_.CreationTime.ToString('yyyy-MM-dd')) - $($_.GetGFSFlags())"
            }
        }
    }
}

Disconnect-VBRServer
POWERSHELL: MONITOR COPY JOB HEALTH AND DETECT JOBS RUNNING BEHIND
Connect-VBRServer -Server "vbr-server.domain.local"

$copyJobs = Get-VBRJob | Where-Object { $_.JobType -eq 'BackupCopy' }
$threshold = (Get-Date).AddHours(-25)
$issues = @()

foreach ($job in $copyJobs) {
    $lastRun = $job.LatestRunLocal
    $lastResult = $job.GetLastResult()
    $lag = if ($lastRun) { [math]::Round(((Get-Date) - $lastRun).TotalHours, 1) } else { 999 }

    $warning = $false
    $reason = @()

    if (-not $lastRun -or $lastRun -lt $threshold) {
        $warning = $true
        $reason += "Last run: $(if ($lastRun) { "$lag hours ago" } else { 'never' })"
    }
    if ($lastResult -eq 'Failed') { $warning = $true; $reason += "Last result: FAILED" }
    if ($lastResult -eq 'Warning') { $reason += "Last result: WARNING" }

    if ($warning -or $lastResult -eq 'Warning') {
        $issues += [PSCustomObject]@{
            JobName    = $job.Name
            LastRun    = if ($lastRun) { $lastRun.ToString("yyyy-MM-dd HH:mm") } else { "Never" }
            LagHours   = $lag
            LastResult = $lastResult
            Issues     = $reason -join "; "
        }
    }
}

if ($issues.Count -eq 0) {
    Write-Host "All copy jobs running on schedule with no failures."
} else {
    $issues | Format-Table -AutoSize
    $issues | Export-Csv "C:\Reports\CopyJob-Health-$(Get-Date -Format 'yyyyMMdd').csv" -NoTypeInformation
}

Disconnect-VBRServer
7. Common Failure Patterns and How to Diagnose Them
GFS Full Not Created on Schedule
The most common GFS complaint. Check the copy job mode first. If it is Immediate Copy mode and the source backup job did not run on the day the weekly GFS full was scheduled, no GFS full was created. The fix is switching to Periodic Copy mode for any copy job where GFS reliability matters, or accepting that Immediate Copy mode can miss GFS creation on days when the source job does not run.
If the copy job is Periodic mode and a GFS full still did not get created, check whether the copy job completed successfully on the scheduled GFS creation day. If the copy job ran with warnings or a partial result, VBR may not have had sufficient data to create the GFS full. Check the job log for that specific run.
Copy Job Always Shows Warning with "Some VMs are not Protected"
This happens when a backup copy job cannot process all VMs in the time window before the next run starts. The cause is usually a copy interval that is too short relative to how much data needs to transfer, or WAN bandwidth insufficient to transfer all VMs between copy runs. Increasing the copy interval or reducing the scope per copy job resolves this.
Copy Job Running Behind Primary by More Than One Day
In Periodic Copy mode, if the copy job is set to copy daily but is consistently behind by more than one primary backup cycle, the WAN link between source and target is not fast enough to transfer the changed data volume within 24 hours. Options: reduce the amount of data per copy job by splitting VMs across multiple copy jobs, add WAN acceleration, or increase the copy interval to every 48 hours and accept a deeper RPO on the copy.
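Before choosing between those options, it helps to know the minimum sustained link rate the current change volume demands. An illustrative Python sketch; the 2 TB daily change and 80% link efficiency are assumptions:

```python
# Minimum sustained link rate to move the daily changed data inside the
# copy interval. Pure arithmetic; inputs are illustrative assumptions.

def required_mbps(changed_tb_per_day, interval_hours=24, efficiency=0.8):
    bits = changed_tb_per_day * 8 * 10**12
    return bits / (interval_hours * 3600 * efficiency) / 10**6

# Assumed: 2 TB of daily change, 24 h interval, 80% link efficiency.
print(round(required_mbps(2)))  # 231 Mbps sustained
```

If the link cannot sustain that rate, no amount of rescheduling will catch the job up; the data volume per interval has to shrink (split jobs, WAN acceleration) or the interval has to grow.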
Copy Job Failing with "Target Repository Has No Free Space"
On a SOBR target, this usually means the performance tier is full even though the SOBR summary shows available capacity in the capacity tier. VBR cannot write to the capacity tier directly for new backup chains. Add a performance tier extent or reduce the offload age threshold so data moves to the capacity tier faster. On a standard repository, size to hold short-term retention plus GFS-flagged fulls across their full retention windows.
KEY TAKEAWAYS
- Immediate Copy mode copies every new restore point as it is created. Best RPO. GFS fulls are NOT created if the source backup job did not run on the scheduled GFS creation day. Use Periodic Copy mode when GFS reliability matters more than minimum RPO on the copy.
- Periodic Copy mode copies the latest available restore point at each scheduled interval. You lose granularity on high-frequency primary jobs but GFS fulls are created reliably on schedule. If a GFS full was missed, VBR creates it after the next successful copy run.
- GFS-flagged full backups cannot be deleted or modified by short-term retention. Once flagged, the file is owned by its GFS retention period. Short-term retention manages only the active chain between the most recent GFS full and the present.
- Only yearly GFS with no weekly or monthly cycle creates a dangerously long incremental chain. Configure weekly and monthly GFS cycles as well, or set a periodic full backup schedule on the copy job to break the chain at regular intervals.
- For deduplication appliance targets, switch GFS creation to Active Full mode. Synthetic full creation uses random I/O that hurts dedup appliance performance. Active Full writes sequentially.
- Seeded copy jobs eliminate the initial WAN blast for large environments. Copy backup files to portable media, ship to the remote site, rescan the target repository, then map the copy job to the seeded data. VBR transfers only incrementals forward from the seed point.
- On a SOBR with Move mode, GFS-flagged fulls are moved to the capacity tier once they are sealed and age past the Move threshold, same as any other inactive chain. The active short-term chain stays on the performance tier because it is never sealed. Size the performance tier for the short-term chain plus GFS fulls that have not yet aged out to the capacity tier.
- "Some VMs are not protected" warnings on copy jobs mean the job cannot transfer all VMs within the copy interval. Increase the interval, reduce the job scope, or add WAN acceleration.
Full article with code examples and interactive diagrams:
https://anystackarchitect.com
