
Veeam v13: Backup Copy Jobs Deep Dive

  • March 16, 2026

eblack

Backup copy jobs are supposed to be the easy part of the second copy in a 3-2-1 design. In practice, they are one of the places Veeam gets misconfigured most often.

 

Start with the biggest decision: Immediate Copy or Periodic Copy

 

Immediate Copy mode

 

Immediate Copy is source-driven. A new restore point appears in the primary job, and the copy job wakes up and tries to move it right away. There is no separate schedule controlling when the copy runs. The result is a copy repository that stays very close to the primary, usually only lagging by however long the transfer itself takes.

That is the real advantage. If the primary repository fails shortly after the backup completes, the copy side probably already has that restore point.

The catch is GFS.

If a weekly GFS full is supposed to be created on Friday, but the source job did not produce a restore point on Friday, Immediate Copy has nothing to act on that day. No run means no GFS full. It does not come back later and fill in the missed slot. That part surprises a lot of people because they assume GFS behaves like a scheduled archive function. In Immediate Copy mode, it does not.

 

Periodic Copy mode

 

Periodic Copy is schedule-driven. The copy job runs at defined intervals and grabs the latest available restore point at that time.

That makes the behavior more predictable, but it also means you can lose restore-point granularity on the copy side. If the primary job runs every hour and the copy runs once a day, the copy repository does not end up with 24 restore points per day. It ends up with one per day. It is not mirroring every point. It is taking the most recent one available at the scheduled interval.
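The granularity loss is easy to see with a tiny simulation (a conceptual sketch with invented numbers, not Veeam internals):

```python
# Conceptual sketch: a Periodic copy grabs the newest available restore
# point at each scheduled run, not every point the primary produced.
# Times are hours since midnight; all values are invented for illustration.

primary_points = list(range(24))   # hourly primary restore points: 0..23
copy_schedule = [23]               # one copy run per day, at 23:00

copied = [max(p for p in primary_points if p <= run) for run in copy_schedule]

print(len(primary_points))  # 24 points exist on the primary
print(copied)               # [23] -> only the latest one reaches the copy
```

Scheduling the copy more often is the only way to narrow that gap; the copy never backfills the intermediate points.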

Where Periodic mode is stronger is GFS consistency.

Because the copy job runs on its own schedule, the GFS full can be created on the scheduled day even if the primary source restore point arrived earlier. And if the scheduled GFS point is missed because of a failed run, Veeam creates it after the next successful run. That is much more forgiving than Immediate mode for environments where GFS retention is part of the real compliance or archive strategy.
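The GFS difference between the two modes can be modeled as a small decision function (purely illustrative; the function and parameter names are invented, not a Veeam API):

```python
# Illustrative model of GFS-day behavior in the two copy modes.
# Function and parameter names are invented; this is not Veeam code.

def gfs_full_created(mode: str, source_point_on_gfs_day: bool,
                     later_run_succeeds: bool) -> bool:
    if mode == "immediate":
        # Source-driven: no source restore point on the scheduled day means
        # no GFS full, and the missed slot is never backfilled.
        return source_point_on_gfs_day
    if mode == "periodic":
        # Schedule-driven: a missed GFS point is created after the next
        # successful run.
        return source_point_on_gfs_day or later_run_succeeds
    raise ValueError(f"unknown mode: {mode}")

print(gfs_full_created("immediate", False, later_run_succeeds=True))  # False
print(gfs_full_created("periodic", False, later_run_succeeds=True))   # True
```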

 

Which mode makes sense where

 

The easiest way to think about it is this:

Immediate Copy is about tighter RPO.

Periodic Copy is about control.

If the goal is an offsite DR copy over a reliable link and you want every restore point copied as soon as it exists, Immediate mode is usually the right fit.

If the target is a slower WAN, a dedup appliance, a cloud target with cost sensitivity, or a copy job where GFS behavior matters more than same-day granularity, Periodic mode is usually the safer choice. High-frequency primary jobs also push people toward Periodic mode, because Immediate mode can create a nonstop stream of copy work that the target side cannot always keep up with cleanly.

 

GFS retention on copy jobs does not behave like people assume

 

A lot of admins think the copy job just inherits the primary retention story.

It does not.

When GFS is enabled on a backup copy job, Veeam flags full backup files as weekly, monthly, or yearly archive points. Once a full gets one of those flags, it is effectively sealed for that retention purpose. Short-term retention does not get to modify or delete it. That file now belongs to its GFS period, not to the rolling short-term chain logic.

That is why the sizing math changes.

If you tell the copy job to keep 14 restore points and also keep weekly and monthly GFS points, the repository does not just hold '14 plus a little extra.' It holds the active short-term chain plus all the GFS-flagged fulls still inside their weekly, monthly, and yearly windows. Those GFS fulls are not being merged away just because short-term retention says the active chain can move on.
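A back-of-envelope calculation shows how much the GFS fulls dominate (all figures are invented example numbers, and compression and deduplication are ignored to keep the arithmetic visible):

```python
# Back-of-envelope copy repository sizing with GFS enabled.
# All figures are invented example numbers; compression and dedup
# are ignored so the arithmetic stays visible.

full_tb = 10.0            # size of one full backup file
inc_tb = 0.5              # size of one incremental
short_term_points = 14    # copy job short-term retention
weekly_gfs = 4            # weekly GFS fulls kept
monthly_gfs = 12          # monthly GFS fulls kept

# Active short-term chain: one full plus the incrementals behind it.
short_term_tb = full_tb + (short_term_points - 1) * inc_tb

# Every GFS point is a sealed full that short-term retention cannot merge away.
gfs_tb = (weekly_gfs + monthly_gfs) * full_tb

print(short_term_tb)            # 16.5 TB for the '14 restore points'
print(gfs_tb)                   # 160.0 TB of sealed GFS fulls
print(short_term_tb + gfs_tb)   # 176.5 TB total, not '14 plus a little extra'
```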

 

Synthetic Full versus Active Full matters more on dedup targets

 

 

Veeam uses synthetic full creation by default for GFS archive points, which is fine for a lot of targets. It builds the full from data already on the copy side rather than reading it again from the source.

That is efficient in one sense, but it can be ugly on dedup appliances because synthetic creation generates random I/O patterns. Platforms like Data Domain, StoreOnce, and ExaGrid tend to behave better when the writes are sequential.

That is why Active Full is usually the better choice for GFS creation on dedup targets. It costs more in source I/O and transfer bandwidth, but the target side handles the write pattern better.

 

The easiest GFS mistake is yearly-only retention

 

This is one of those settings that looks fine on paper and causes a bad chain shape later.

If the copy job keeps only yearly GFS points and does not also break the chain with weekly or monthly cycles, you can end up with one full and a very long stretch of incrementals. That increases restore complexity and makes chain health more fragile than it should be.

The safer move is to include weekly and monthly GFS periods too, or force periodic full creation often enough that the chain does not run away from you.
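The chain-shape problem is simple arithmetic. Assuming one copy restore point per day (an assumption chosen purely for illustration):

```python
# Chain shape under different GFS settings, assuming one copy restore
# point per day. The assumption and the numbers are for illustration only.

runs_per_year = 365

# Yearly-only GFS: one full, then incrementals until the next yearly cycle.
yearly_only_incrementals = runs_per_year - 1

# Adding weekly fulls caps the incremental run between fulls.
max_incrementals_with_weeklies = 7 - 1

print(yearly_only_incrementals)        # 364 incrementals behind one full
print(max_incrementals_with_weeklies)  # at most 6 per chain segment
```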

 

Backup-job source versus repository source

 

A copy job can watch either named backup jobs or an entire repository.

Source by backup job is usually the cleaner option for most environments. It gives you tighter control, and when new VMs are added to the source job, they are picked up by the copy job automatically.

Source by repository is useful when the policy goal is broader than individual jobs. MSP-style environments are the obvious example. If the goal is 'everything in this repository gets copied offsite,' then repository-based sourcing is easier to maintain than keeping a matching copy configuration for every source job.

The important point is just to make that decision deliberately, because it affects how scope changes over time.

 

Seeding is worth it when the first copy would otherwise be brutal

 

Large first-run copy jobs over slow links are where seeding pays for itself fast.

The idea is simple: create the source backup chain normally, move that data physically to the remote site, rescan the target repository there, and then map the backup copy job to that preloaded data so only the incrementals have to cross the WAN afterward.

What trips people up is sequencing.

The seed has to match what the source chain looked like when it was copied. The files should not be modified or transformed before the mapping step. If the source job ran more incrementals after the seed was created, that is fine. Veeam can send those changes later. But the seeded full and incrementals need to remain consistent with what was transported.

That is where many 'seeded' copy attempts go wrong. They are not really mapped from a clean seed state.
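One generic way to sanity-check a seed before the mapping step is to compare checksums of the transported files against the originals. This is a standard-library sketch, not a Veeam tool, and the `*.v*` file pattern and directory layout are assumptions for illustration:

```python
# Generic integrity sketch: confirm the transported seed files still match
# the originals byte-for-byte before mapping the copy job to them.
# Standard library only; the '*.v*' pattern (.vbk/.vib files) and the
# directory layout are assumptions for illustration.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def seeds_match(source_dir: Path, seed_dir: Path) -> bool:
    """True only if every backup file in source_dir exists unchanged in seed_dir."""
    for src in sorted(source_dir.glob("*.v*")):
        seed = seed_dir / src.name
        if not seed.exists() or sha256_of(src) != sha256_of(seed):
            return False
    return True
```

If this comes back False after transport, the seed was modified or transformed somewhere along the way, and mapping to it is how the 'not really seeded' failures start.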

 

SOBR works as a target, but it has rules

 

A backup copy job can absolutely target a Scale-Out Backup Repository, but the behavior is not magic.

The copy data lands in the performance tier first. From there it follows the same general offload logic as other backup data, which means inactive sealed chains can move to the capacity tier according to policy.

That matters especially for GFS fulls. Once those archive fulls are sealed and old enough to meet the SOBR move threshold, they can offload to capacity tier. The active short-term chain does not, because it is still active and unsealed. So the performance tier still has to be sized for the active chain plus any GFS fulls that have not yet aged enough to move.

There is another practical issue here too: if extent health is bad and part of the chain lives on an extent that is sealed or unavailable, the copy job can break on the next run. On a SOBR target, extent health matters just as much as the nominal capacity total.

And forever-forward incremental chains do not play especially well with Move mode logic because there is no sealed inactive chain for Move mode to work on. In those cases, Copy mode is usually the better fit if you need offload behavior.

 

The common failure patterns are not mysterious

 

Most backup copy job complaints come down to the same handful of causes.

If a GFS full was not created, check the mode first. Immediate Copy can miss the scheduled day entirely if the source job did not create a restore point that day. If the job is Periodic, then check whether the scheduled run itself actually succeeded.

If the copy job keeps warning that some VMs are not protected, that usually means the copy job cannot move all required data before the next interval arrives. That is not a philosophical warning. It is a throughput problem.

If the copy side is consistently more than a day behind the primary in Periodic mode, the schedule and the available bandwidth no longer match the data change rate. That means reducing scope, changing interval, or adding WAN acceleration.
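Whether the schedule and bandwidth still match the change rate is basic throughput arithmetic (figures invented for illustration):

```python
# Can the copy job move a day's worth of changes in its window?
# All figures are invented example numbers.

daily_change_gb = 500.0      # new data the primary produces per day
link_mbps = 100.0            # usable WAN bandwidth, megabits per second
copy_window_hours = 8.0      # time available for the copy each day

# megabits/s -> GB moved over the whole window
transferable_gb = link_mbps * 3600 * copy_window_hours / 8 / 1000

print(transferable_gb)                     # 360.0 GB fits through the link
print(transferable_gb >= daily_change_gb)  # False -> the copy falls behind daily
```

When that comparison comes out False, the options are exactly the ones above: reduce scope, change the interval, or add WAN acceleration.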

If the job fails because the target has no free space and the target is a SOBR, check the performance tier first. New copy chains do not start by writing directly to the capacity tier just because total SOBR capacity exists on paper.

 

Final thoughts

 

Backup copy jobs are one of those features that look simple until you need them to behave predictably.

The biggest mistakes usually come from assumptions: assuming the copy mirrors every primary restore point, assuming GFS behaves the same in both modes, assuming short-term retention controls everything on the copy side, assuming seeding is just 'copy files over there,' and assuming SOBR total capacity means write headroom exists where the job actually needs it.

If I were setting these up in production, I would make four decisions carefully before anything else: pick the copy mode based on RPO versus schedule control, size retention with GFS fulls included instead of forgotten, seed large WAN copies instead of brute-forcing the first sync, and treat SOBR performance-tier capacity as operational space rather than theoretical space. That is what keeps the second copy from becoming the second surprise.

3 comments

Chris.Childerhose

Great article Eric, very detailed.  A good read for those that want to use Backup Copy Jobs. 😎


coolsport00
  • Veeam Legend
  • March 16, 2026

Nice detailed BCJ writeup ​@eblack . I did a 3-part series of this topic a few yrs back, but a lot has changed since then (v11). 😊 Appreciate the share!


eblack
  • Author
  • Influencer
  • March 16, 2026

Nice detailed BCJ writeup ​@eblack . I did a 3-part series of this topic a few yrs back, but a lot has changed since then (v11). 😊 Appreciate the share!

I decided to dig in after seeing there were a number of changes. 👉👈🤝