
I posted the below on Reddit, let me know your thoughts...

We're in talks at work to create a new GFS repo where we keep X daily, Y weekly, Z monthly, and maybe a single yearly on the Veeam repository, and also take a monthly/yearly copy from said repo to tape and then off to the safe deposit box at the bank. What if I lose my repo and all that data? Yeah, I might have a few monthlies/yearlies at the bank, but I'm hosed on the daily/weekly side. All that history, lost. Does anyone have a redundancy approach they're using? We're thinking about using a couple of Synology boxes in HA mode, basically where they stay in sync with each other. Let me know what you think. I also think we could use a second Synology box to do backup copy jobs and keep the same GFS retention points there...

Hi @jaceg23 -

Well, a little clarification here → GFS is only configured for Weeklies, Monthlies, and Yearlies. No Dailies. What I think you’re referring to, though, is having your Job’s short-term retention backups and your GFS points all on the same Repo?

But yes...at first read-through of your post, if you lost that Repo, you would lose any short-term, Weekly, Monthly, and potentially Yearly backup files on the Repo that aren’t copied elsewhere.

What you could do to have a copy of everything on a secondary storage device is enable a “secondary location” for the job to store your files. But in doing so, you have to have a supported storage array type connected to Veeam:

GFS & Copy Backups Configs

I can’t comment on your Synology idea, since I currently don’t use it. I use Nimble. With Nimble, you have the ability to configure Replication Partners and replicate your prod (or backup?) storage Volumes to a partner array. I can configure them to replicate daily or as infrequently as weekly. The first replica is a ‘full’ of the Volume on the partner array, then subsequent replicas are incrementals of changes only. So, that approach is an option. Arrays do have dedup/compression capabilities, but don’t have Block Cloning to significantly reduce the storage footprint of keeping backup copies like you could have if you were to use your storage in a GFS-based solution through Veeam. And, though not an exact science, you can utilize Veeam Calculators to guesstimate the space you need for GFS, whereas with array replication it’s a best-guess situation (i.e. there’s no calculator to help you make an educated guess on how much storage you’d need for all your replicas).
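
To show what that kind of guesstimate looks like, here’s a back-of-envelope sketch in Python. It is NOT Veeam’s official calculator — every ratio (daily change rate, compression/dedup reduction, unique blocks per GFS point under Block Cloning) is an assumption you’d tune for your own environment:

```python
# Back-of-envelope GFS storage estimator -- a rough sketch, not Veeam's
# official calculator. All ratios below are assumptions to tune:
#   daily_change : fraction of blocks changed per day
#   reduction    : combined compression/dedup ratio on stored data
#   gfs_unique   : fraction of a full each GFS point adds as unique
#                  blocks when Block Cloning (ReFS/XFS) is in play

def gfs_storage_gb(full_gb, daily_change=0.05, reduction=0.5,
                   short_term=14, gfs_points=17, gfs_unique=0.25):
    """Rough GB needed for one job's short-term chain plus GFS points."""
    base_full = full_gb * reduction                      # one real full
    incrementals = short_term * full_gb * daily_change * reduction
    gfs = gfs_points * full_gb * gfs_unique * reduction  # cloned fulls
    return base_full + incrementals + gfs

# 1 TB source; 14 dailies + 4 weeklies + 12 monthlies + 1 yearly (17 points)
print(gfs_storage_gb(1000))  # -> 2975.0
```

With array replication there’s no equivalent formula to lean on, which is the point made above — you’re making a best guess on replica sizing.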

Hopefully what I shared helps you a bit. 


To add to this: if you use Synology with an environment like VMware, you can replicate LUNs across the boxes for added protection. Synology supports a few replication methods even without a VMware environment.


Hi @jaceg23, personally I wouldn’t replicate between your Synology devices, as I wouldn’t class it as an independent copy of your data. If you corrupt the source, the corruption syncs to the destination. You’ve then got to rely on your replication retention to have something recoverable, which is extra overhead on the storage, and if your replications run whilst metadata updates are taking place, you could have a corrupt set of files you’re trying to recover.

 

I’d rather use the KISS approach, and create a Veeam native backup copy job. You can have your own retention for this, your backup copies won’t update until your backups are complete so you’ve always got at least one static file, and then you’ve got a repository that already exists within Veeam’s configuration to recover immediately without faffing about with restoring Synology snapshots.

 

I’m not saying this to rubbish your idea, just to share some pain points I’ve seen over the years! 🙂


I had someone mention on Reddit about “Scale-out Backup Repository in copy mode” and “ScaleOut Backup repository with performance tier and capacity tier in Mirror mode”. Are these the same? Would this require storage from a 3rd party? Where can I learn more about this approach?


You can use your own storage or 3rd party for the Capacity Tier.  You can learn more here - Scale-Out Backup Repositories - User Guide for VMware vSphere (veeam.com)


Hi @jaceg23 , I actually thought about sharing about SOBR but ended up forgetting to post 😂

 

You can read more about SOBR here:

https://helpcenter.veeam.com/docs/backup/vsphere/backup_repository_sobr.html?ver=120


And, you want to focus on Capacity Tier and Copy Mode. Copy will do just that... copy your backups as soon as they’re done.


Hi @jaceg23 -

I was not able to provide more context in my last comment and wanted to do so to hopefully clarify some of the options you have, more specifically SOBR Copy/Move to Capacity Tier. We recently had a webinar on this topic in the VMCE Study Hall a little over a week ago. It was a good discussion because those modes can be a little confusing.

There are some limitations and/or requirements to Move mode (not so much with Copy mode) you need to be aware of so you know exactly where your Backup Files are located when you use SOBRs. When you initially back up, they will obviously be on your Performance Tier; and if you select Copy mode when you add a Capacity Tier, Veeam will immediately copy your backups to Capacity Tier once the backup job completes. So you’d then have 1 copy on Performance Tier and 1 copy on Capacity Tier. Pretty simple. The only item to be aware of when setting Copy mode up is the “Window” link at the bottom of the ‘Add Capacity Tier’ SOBR wizard. This “window” is a timeframe you can configure to ‘allow’ or ‘prohibit’ when Veeam may Copy (or Move, if you have that mode configured) your backups to Capacity Tier.
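
Conceptually, that “Window” is just a weekly allow/deny schedule. Here’s a minimal Python sketch of the idea — the grid and names are my own illustration, not Veeam’s actual implementation:

```python
from datetime import datetime

# Illustrative 7x24 allow/deny grid, mimicking the SOBR "Window" setting:
# True = offload to Capacity Tier is allowed during that hour.
ALLOWED = [[True] * 24 for _ in range(7)]   # rows: Mon..Sun
for day in range(0, 5):                     # prohibit Mon-Fri...
    for hour in range(8, 18):               # ...business hours 08:00-18:00
        ALLOWED[day][hour] = False

def tiering_allowed(now: datetime) -> bool:
    """Is copy/move to Capacity Tier permitted at this moment?"""
    return ALLOWED[now.weekday()][now.hour]

print(tiering_allowed(datetime(2024, 1, 1, 9)))   # Monday 09:00 -> False
print(tiering_allowed(datetime(2024, 1, 6, 9)))   # Saturday 09:00 -> True
```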

Move mode is different in a couple of respects → for this mode, you have to configure an “operational restore window”. Up until recently, I didn’t know what the heck that was 😂 It’s basically a timeframe you configure covering when you are most likely to do restores of your data. Typically, this is within 7-14 days. As such, you want restores to be as fast as possible, so you want to restore your backup data from Performance Tier, and during this ‘window’ Veeam is not allowed to Move your backup files. After the configured days for this setting, Veeam will Move your data to Capacity Tier; afterwards, you no longer have data on both Performance & Capacity, just on Capacity. According to the Guide, Veeam does this every 4hrs. A caveat to this? A couple of things: 1. at the end of 4hrs you could still be in the “prohibited” operational window you configured (if you configured one); 2. active backup chain → Veeam only moves inactive backup chain files, which are “sealed” by a Full. There’s a bit more validation which needs to take place that you can check out in the Guide, but generally speaking, that’s how Move mode works.
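
To make those Move-mode rules concrete, here’s a tiny Python sketch of the eligibility check described above — outside the operational restore window AND part of a sealed (inactive) chain. The field names and the 14-day default are my own illustration, not Veeam’s internal logic:

```python
from datetime import datetime, timedelta

# Sketch: a restore point may move to Capacity Tier only once it falls
# outside the operational restore window AND its chain is sealed by a
# newer Full (i.e. the chain is inactive). Illustrative only.

def eligible_to_move(created: datetime, chain_sealed: bool,
                     now: datetime, window_days: int = 14) -> bool:
    outside_window = (now - created) > timedelta(days=window_days)
    return outside_window and chain_sealed

now = datetime(2024, 3, 1)
print(eligible_to_move(datetime(2024, 2, 1), True, now))   # -> True
print(eligible_to_move(datetime(2024, 2, 25), True, now))  # -> False (in window)
print(eligible_to_move(datetime(2024, 2, 1), False, now))  # -> False (active chain)
```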

So there you have it...Copy vs Move 🙂 So, between this option, storage replication (either via array directly or via Veeam), or GFS...which should you use? Great question! Well, it will honestly depend upon your environment and the ease vs complexity you’re willing to configure and maintain (I think SOBRs can be a bit complex to configure, as well as for knowing where your backup data resides at any given time).

From a high level, as I mentioned previously, with GFS you can use a calculator to guesstimate how much storage you’ll need. When you enable this, be aware all your data → short-term and long-term (GFS) → will be on the Repository you configure for the Job. You also have the luxury of using Block Cloning here, so your Fulls won’t be outrageously large. Configuring a “secondary location” in your Jobs just replicates all this data to a partner/secondary array, so this is basically Storage Snapshot orchestration; you can’t really control location there too much. Whereas with SOBR, you can add almost any kind of Object Storage you want to your Veeam Console and configure those as Capacity Tier options for your jobs. Much more storage control. It’s just advisable to be up to speed on where your backup data resides.

Well, that’s all I know about options of where to keep your backup data short vs long term. Choice is yours 🙂 Let us know if you still have questions.


@coolsport00 

Thank you for the lengthy reply, honestly..really. Looks like SOBR with Copy mode is something I might be interested in. I just need to pay attention to the "window" of allotted times Veeam will be able to copy backup data from Performance to Capacity tier. This should give me a 1:1 copy of the data in Performance tier in my Capacity tier, if I'm understanding it correctly. I just need to find something to test said Performance/Capacity tiers with in my test environment and go from there. Or, I can, as @MicoolPaul stated, just apply the KISS method and use the native Veeam Backup Copy Job to another repo, and that should also give me what I'm looking for.



No problem.

Yep, a Copy Job would also work. Many options available 😊


Hi @jaceg23,

 

Just to highlight something to you here as it seems a bit glossed over, capacity tier in a SOBR is only for object storage, so your second Synology will not be compatible with this. Hence my suggestion of a backup copy. You could certainly use a capacity tier for a second copy of data if you’ve got some object storage you want to work with. If you’re evaluating options you could use something such as Scality ARTESCA’s virtual appliance, or you could use cloud-based object storage in an OpEx financial model such as the big three Azure/AWS/GCP or another provider such as Wasabi.


For your production build, this brings up another great topic: immutability. For block-based storage such as Synology, the only way you’ll get any type of immutability would be locking the device down and using it via iSCSI as a storage volume for a hardened Linux repository. With object storage, on the other hand, if you’re purchasing hardware it’s typically a pre-locked-down solution with direct-attached storage, or it’s a cloud provider where you don’t have any admin/root access to modify/purge data if your tenant was compromised. Another immutability option away from all this is the good old-fashioned air gap with tape 🙂

