
Hi everyone,

We’re using Veeam B&R 11 Standard edition.  We are a small museum with a large repository of digitised versions of our collections - essentially a file server with multiple disks.  We had an issue whereby our largest server had to be re-imported into vSphere and was picked up as a new VM by Veeam.  I’m looking for options to minimise WAN usage to seed our offsite backups of this large VM.

Can I copy a new full backup of the VM to a second site, scan the second site’s repo and start a parallel backup strategy there?  Essentially, the same full backup would be the first backup in two parallel backup trees.  The backup source is a single VM of approximately 40TB, so I would rather not run a full backup across our VPN.  I’m worried that Veeam will not like two repositories starting with the same data if they ultimately diverge.

Why would I not just run a copy job instead of backing up the same data twice?  The two sites have vastly different retention strategies, one being essentially a tape cache, so I don’t think a backup copy job is appropriate.  Also, the remote site is limited to reverse incremental on Windows, with the local site using full, incremental and synthetic full backups on a hardened Linux repository.  Our data doesn’t change much week to week (museum), so incrementals are small.

There will be a new full backup completed at the end of this week, so I can make a copy and bring it to the second site on a small NAS to copy into the Windows repo.

I understand that I can just try this and see if it works, but I would not like to depend on it if it’s not a typical strategy.

 

With thanks,

Simon

Hi Simon -

Yes, you can kinda do what you want and perform what is called Replica Seeding using Veeam Replication. See the User Guide here to learn more about it. What I would do is place a copy of your backup in a 2nd Repo if you can (not the same one), then create a DR-side VBR server and create/run a Replication job of your “file server” using this “seeded” backup copy. The only data that’ll traverse your WAN is changed data. Another option is to make a copy of the VM, place it on your DR side and do a Replica Mapping job. This does the same thing as seeding, but instead of using a copy of a backup in a Repo, the job uses the copied VM you placed on the DR side and, again, will only replicate VM changes.

There are a couple of caveats to be aware of → if you change the disk size of your source VM, the replication job will essentially perform a full replication. In other words, CBT (changes only) will not be used. And, as Replication uses what is essentially VMware snapshot technology, you can only have 28 restore points for your replica VM.


Also, a few benefits of using Veeam Replication I didn’t mention (I only mentioned caveats): you have a DR strategy for this important VM. You can recover by failing over to it in literally a matter of minutes and have minimal downtime. You have the option of doing regular recoveries from it (i.e. file-level restores) if needed. You can do testing operations (updates?) and discard the changes when done. Basically, there is a lot of upside to having a Veeam Replication strategy.


Yes, you can, as @coolsport00 mentioned.


What Shane mentioned will work for a Replication strategy, but if I read your post correctly you want to do backups of the same VM at the second site using the current repository as a seed.  Reverse incremental backups are much slower and not recommended at the DR site, and if the source repository uses forward incremental backups, I do not believe you can copy the backup files over to start a second chain for the VM at the DR site.  The backup chains are constructed very differently for each type of backup job.

You could look into a backup strategy using WAN Accelerators to send the backup to both sites as another possible solution, but they need to be deployed at each site so you can choose a source and a target accelerator.  WAN Accelerators - User Guide for VMware vSphere (veeam.com)

Also, the infrastructure on both sides might play a role if you are using different hypervisors, etc.  Something to take into consideration when planning things.


Good point Chris about the backup structure of each side. 👍🏻🙂

And, though it hasn’t been shared as to exactly ‘when’, the Reverse Incremental mode will be deprecated in the future. So that’s something to keep in mind moving forward. 

I didn’t even bring up the possibility of traversing the WAN for any kind of backup strategy because, even with WAN Accelerators, I think it would take quite a long time to get the initial seed of 40TB to the DR side. Your best bet, I believe, is to place a copy of the source data over there, then do some kind of strategy from that side - either a DR-side backup, or Veeam Replication.
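To put rough numbers on how long a 40TB initial seed could take over the WAN, here is a back-of-the-envelope sketch. The link speeds and the 80% sustained-utilisation factor are illustrative assumptions, not figures from this thread:

```python
# Rough WAN transfer time for a 40 TB seed at various link speeds.
# Link speeds and the 80% utilisation factor are assumptions.
SECONDS_PER_DAY = 86_400

def transfer_days(size_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Days needed to move size_tb (decimal TB) over a link_mbps link."""
    bits = size_tb * 1e12 * 8                        # TB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)  # usable throughput
    return seconds / SECONDS_PER_DAY

for mbps in (100, 500, 1000):
    print(f"{mbps:>4} Mb/s: ~{transfer_days(40, mbps):.0f} days")
```

Even at a saturated gigabit link the seed alone is several days of transfer, which is why shipping the data physically tends to win here.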



Thanks.  Yeah, 40TB would be too much to move across the WAN, but it was an option I wanted to throw out there; if it’s not useful, then whatever works best can be used.  😁


It needed to be shared 😊


It’s probably too late for this, but since you had to re-import the same VM and Veeam is seeing it as a different VM, there is a way (unsupported) to remap the MoRef ID of the re-imported “new” VM to the old backup in the database so you’re not starting with a new backup.  Since you happen to be using v11, you can still use this utility to remap the backup.

https://www.veeam.com/kb2136

If you were able to map to the existing backup, then you also wouldn’t need to copy a new chain for the copy job as it would reattach to the existing copy data.



Assuming that my previous reply about mapping the MoRef ID is not a valid solution at this point….

 

Here’s what I’ve done for this, assuming that you want to just keep the data on the NAS with no import into a server at the recovery site.  Set up your repository on the NAS using a DNS name.  Have the NAS on-site at the primary location and set up your backup copy job.  Once the backup copy to the NAS has completed, disable the copy job, then shut down and move the NAS to the recovery site.  Bring the NAS online with its (presumably new) IP address at that site (unless you are stretching your LAN, in which case this is even easier).  Update DNS to point to the new IP address of the NAS.  Once it’s online and accessible via the DNS name, refresh your repository to verify that the NAS/repo is visible to Veeam.  You may want/need to update the proxy servers that have access to the NAS.  Resume your copy job and validate that copies are continuing to the NAS as expected.  If you’re using a stretched LAN between the two sites (guessing not, since you’re using a VPN to connect them), then you may only need to update the proxy used to access the repo and wouldn’t need to make any of the IP/DNS changes.

If you’re copying the data into a different server, you’d probably want to point your copy job to the NAS on-site (or clone the job to copy on-site), and then you could likely copy the data from the NAS into the server’s repository at the recovery site, rescan the repo, and then map your backup copy job to that data in the repo.  I haven’t tried this, and I think it would be fine in theory, but there are a lot of details in the setup that you’d have to make sure match up.


Thank you everyone for your guidance.  The items that won’t apply in our situation are still great to know.

In our case, we do not have a hypervisor at the second site, so without a rethink on our DR strategy, I’ll have to keep going with the backups method.

I believe it is too late to try the MoRef remap, because I now have new backups for the large VM on our older local repository that I am reliant on right now while we work towards our retention limit.

The new backup that is currently running goes to a newly-built repo, so I had hoped that it would be a clean backup that could be reused in the second location as the first full backup of a parallel chain, but from what Chris said above, the technology chosen (forward versus reverse incremental) would make it incompatible with the second site even as a first full backup.

I’m still reliant on the second site for the other VM backups, so the “build locally with the appropriate hostname, then move the repo to the second site with an IP update in DNS” strategy would break other things (thanks anyway, Derek).

It looks like there isn’t a software fix for my problem.

I’m starting to think that the correct thing to do here, given the future deprecation of reverse incrementals, is to ship my new local repo to the second site, copy the reverse incremental backup trees to a spare folder to maintain recovery of files (at least for our agreed retention period), bring the remote server back in-house and rebuild it as a Linux hardened repository, and run a full backup to it.  Then reconstruct my local backup strategy on this machine.  That’s the least chance of anything corrupting, and at the end I have two hardened repositories instead of one.  It’s very wasteful of space to maintain the orphaned backups alongside a new backup tree, but I may get away with it for the retention period we have agreed on, depending on how efficient the XFS reflinks and ZFS compression turn out to be in my environment.
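As a rough sanity check on that space overhead, here is a sketch of the arithmetic; the change rate, retention window, and compression ratio are all assumed figures, not measurements from my environment:

```python
# Rough capacity estimate for keeping the orphaned chain alongside a new one.
# Change rate, retention, and compression ratio are illustrative assumptions.
full_tb = 40.0          # one full backup of the large VM
weekly_change = 0.005   # ~0.5% change per week (low-churn museum data)
weeks_retained = 12     # assumed agreed retention period

# Each chain is roughly one full plus its retained incrementals.
chain_tb = full_tb * (1 + weekly_change * weeks_retained)
raw_total = 2 * chain_tb            # orphaned chain + new chain side by side

compression = 0.5                   # assumed 2:1 repository data reduction
print(f"raw: {raw_total:.1f} TB, with 2:1 reduction: {raw_total * compression:.1f} TB")
```

With low weekly churn the incrementals barely matter; it’s carrying two fulls that dominates, so whether this fits comes down almost entirely to the data-reduction savings on the repo.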

 

Thanks everyone again for all the input.

Simon


Sounds like a good plan.  Best of luck with everything and if you have any other questions we are always here to help. 😁


Sorry our suggestions couldn’t resolve what you’re needing, but glad we could provide some good info.

All the best!


Solid plan I think.  Best of luck with your endeavor.

 


The #1 thing you do not want to do is manually copy files. I think it was @JBuff who likes to say, “A backup of a backup is not a backup.”

