
Hello to all,

I would like some advice from you on how best to approach this setup.

We have two locations. Each location has a 64 TB Synology NAS, and at one site we also have an additional NAS for an air-gapped solution.

 

Now we would like to implement cloud storage, and we want to ask what the best configuration for this scenario would be: a scale-out repository or something similar.

At the moment we are doing backups at one location (Vienna, Austria) and running Backup Copy Jobs via WAN Acceleration to the other location (Novi Sad, Serbia).

We would like to configure it according to best practices and in the most secure way possible.
 

Do you have a preferred cloud vendor in mind? Also, if you are using v12 of Veeam, you can now go direct to object storage, whether on-premises or in the cloud. So you might want to look into something like Wasabi or similar, and you can send backup jobs or backup copy jobs there as well to meet the cloud storage requirement.


Hello Chris,

We don’t have a preferred cloud storage vendor but will probably go with Wasabi. We are not on v12 currently but will upgrade if we need to.

What would the configuration be for this scenario: a scale-out repository with archive and capacity tiers, or something else?

 

Best regards,
Nemanja



If you are not on v12, then yes, a SOBR with a Capacity Tier. If you get to v12 first, you can go direct to object storage with a backup job or a backup copy job.
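As a rough sketch, that decision can be captured in a couple of lines (the function name and return strings are made up for illustration; the version threshold is from the advice above):

```python
def cloud_target_approach(veeam_major_version: int) -> str:
    """Choose how to reach cloud object storage based on the
    Veeam B&R major version, per the advice above."""
    if veeam_major_version >= 12:
        # v12 added direct-to-object support for backup and copy jobs
        return "direct backup / backup copy job to object storage"
    # pre-v12: offload to the cloud via a SOBR Capacity Tier
    return "SOBR with Capacity Tier offload"
```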


What would be the best option in our case, then, given that we have two locations?
A backup copy from the main location to the second one, and also a backup copy job to cloud storage?
I'm trying to figure out the best option so that I don't need to reconfigure everything from time to time like now :)

Thank you



Yeah, you could do the following:

  1. Backup job at the primary site to the primary NAS.
  2. Backup Copy job from the primary site to the secondary site, and vice versa if needed.
  3. Backup Job / Backup Copy Job / SOBR offloading to Wasabi for the cloud copy. If you upgrade to v12, the first two apply and you don't need SOBR offloading with a Capacity Tier.
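To make the layout above concrete, here is a toy sketch of the job plan. All the names and the dictionary structure are placeholders for illustration; this is not the Veeam API:

```python
# Illustrative sketch of the job layout described above.
# All repository and job names are made up.
backup_plan = [
    {
        "type": "backup",
        "source": "primary-site-vms",
        "target": "primary-synology-nas",
    },
    {
        "type": "backup-copy",
        "source": "primary-synology-nas",
        "target": "secondary-synology-nas",  # and vice versa if needed
    },
    {
        # On v12 this can be a direct backup/backup copy to object storage;
        # pre-v12 it would be SOBR offloading with a Capacity Tier.
        "type": "backup-copy",
        "source": "primary-synology-nas",
        "target": "wasabi-bucket",
    },
]

def offsite_targets(plan):
    """Return every target that is not the primary NAS,
    i.e. the copies that survive a primary-site failure."""
    return [job["target"] for job in plan
            if job["target"] != "primary-synology-nas"]
```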

If there is nothing that needs to be backed up at the Serbia site, then I would set up a backup at the primary site, and set up a SOBR at the secondary site with a copy job to copy the backup data to Serbia and also to Wasabi.  If you have data in Serbia that needs to go to Austria, then I’d set up the reverse as well.  Note that you may not want to use the same bucket.  If you’re using the same backup server to coordinate it all, then a single bucket may be fine, but I’d probably use one bucket for Serbia and one for Austria if there is a need for a SOBR in Austria.

As Chris mentioned, you could also have direct backup to object or direct copy to object storage, but for me, I would utilize a SOBR so that I could take advantage of immutability within Wasabi.



You don’t have to use SOBR to take advantage of Immutability.  😉

Direct to Object going to Wasabi will do that as well in v12.
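For a sense of what immutability on an S3-compatible bucket like Wasabi's involves, here is a hedged sketch using boto3 and S3 Object Lock. The bucket name, endpoint, and 30-day retention are illustrative, and the network call is defined but not executed:

```python
def default_retention(days: int) -> dict:
    """Build an S3 Object Lock configuration with a default retention.

    COMPLIANCE mode means locked objects cannot have their retention
    shortened or removed by any user, including the root account.
    """
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": days}},
    }

def enable_object_lock(bucket: str, days: int, endpoint: str) -> None:
    """Apply a default Object Lock retention to an existing bucket.

    The bucket must have been created with Object Lock enabled, and
    credentials must be configured; the endpoint would be the
    S3-compatible URL of your provider.
    """
    import boto3  # imported here so the helper above stays stdlib-only

    s3 = boto3.client("s3", endpoint_url=endpoint)
    s3.put_object_lock_configuration(
        Bucket=bucket,
        ObjectLockConfiguration=default_retention(days),
    )
```

Whether Veeam manages the retention itself or relies on a bucket default depends on how the repository is configured, so treat this as background on the storage side rather than a Veeam recipe.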



You know… as I was typing that, I was questioning whether I was correct, but figured somebody would correct me if needed.  I don’t have a lot of direct-to-object happening right now, although there are one or two…   😀


Thank you for the information.

For the Backup Copy Jobs, is it good to implement WAN Acceleration from the primary site to the secondary site and vice versa?

Also, what about security: should we enable encryption for the regular backup jobs and also for the backup copy jobs?


Also, is there a way to check backups automatically from time to time,
to verify they are not corrupted and that everything is working properly?

For example, by doing a test restore or similar.


Hi @NemanjaJanicic, WAN acceleration for a BCJ can offer advantages, depending on your bandwidth between the sites.

The general guidelines are:

<100Mbps: Use low bandwidth mode if working with a WAN accelerator. This will typically provide greater effective speeds vs native transfers.

100Mbps-1Gbps: Use high bandwidth mode if working with a WAN accelerator. This won’t beat native speeds, but it will provide similar speeds while consuming less bandwidth on the WAN link. In one such scenario we saw 1Gbps speeds, but with only a few hundred Mbps of bandwidth consumed on the actual WAN link.

 

Anything 1Gbps+, don’t use a WAN accelerator.
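The guidelines above reduce to a simple decision rule. A minimal sketch, with the thresholds taken from this post and a made-up function name:

```python
def wan_accelerator_mode(link_mbps: float) -> str:
    """Pick a WAN accelerator mode from the provisioned link speed,
    following the rough guidelines above.

    <100 Mbps      -> low bandwidth mode (better-than-native speeds)
    100-1000 Mbps  -> high bandwidth mode (near-native speeds, with
                      less bandwidth consumed on the WAN link)
    1 Gbps or more -> skip the WAN accelerator entirely
    """
    if link_mbps < 100:
        return "low bandwidth mode"
    if link_mbps < 1000:
        return "high bandwidth mode"
    return "no WAN accelerator"
```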

 

So let’s start with this: what is your provisioned bandwidth between the sites, and how much of it is in use? That way we can understand how much bandwidth is available.

 

As for security, I’d encrypt every backup that isn’t going to a deduplication device; better safe than sorry. Since Veeam can still read the data (it encrypted it, after all), this doesn’t impact data efficiency for compression and deduplication.

 

RE the backup checking: yes, there’s built-in functionality for backup health checks :) 

 

See the first option, “storage-level corruption guard”: Maintenance Settings - User Guide for VMware vSphere (veeam.com)

 

This checks your backups for general consistency, but you could also leverage SureBackup to perform a fully isolated (sandbox type) restore of your servers.


Hello @MicoolPaul
Thank you for the comment.

We have a 1 Gbps connection between sites; to be exact, download/upload is around 750 Mbps.
We do backups during night hours, so only a really small portion of the bandwidth is in use at that time, because we work from 8am to 4pm.

At the moment the Backup Copy Jobs are set to “as new restore point appears”. We had a few problems with access rights after GPO changes, but now it’s okay.
We are searching for the best possible solution for us.
I’m not that advanced with Veeam backups, but I’m trying to learn and get better.

High bandwidth mode is activated for our WAN Accelerators.
We have two accelerators.
One is used for our biggest VM, which is around 8TB, and the other one is used for multiple smaller VMs.

Currently we are not so confident in our backup system, so it’s time to do it from scratch, but do it in a proper way.

 


If that bandwidth is shared between site to site and WAN then WAN accelerator would make sense if you’re planning on uploading to cloud-based object storage too.

 

Currently we are not so confident in our backup system, so it’s time to do it from scratch, but do it in a proper way.

 

To start at the beginning, you should aim to do the 3-2-1-1-0 rule. Whilst it’s called a rule, I prefer to call it the minimum amount of effort you should invest in your backups.

 

3 copies of data. This can include your production copy, but it’s better when it doesn’t. If you’re going to have a production copy, a local backup, a backup copy in another country, and a backup in the cloud, then that’s fantastic; you’re certainly meeting and exceeding the 3 copies of data. If you want to stop doing the site-to-site backup copy and instead swap it for cloud, you’re still meeting this criterion.

2 different media. This can mean different things to different people. For some it’s using different file systems between backups (ReFS/NTFS/XFS etc); for others it’s a vendor hardware break, so you can’t use the same RAID controller/manufacturer for all of your backups, to prevent a firmware-level issue. But the (in my opinion) best interpretation is different media types, such as block-based storage, object-based storage, and tape. These different architectures can help prevent issues with bad code in one type of hardware, or in the integration code within Veeam. The weakness I see here in your current setup is using the same Synology NAS units everywhere.

1 backup copy stored offsite. You’re meeting this, and evacuating data well out of the country, which is great; offsite/cloud object storage would help here too.

1 immutable/offline copy. This is where you’re probably weakest. You mentioned an air-gapped NAS, but ideally some more information would be useful here to determine how effective the air gap is. Immutable object storage can help dramatically with this, and I personally would try to have a backup copy on storage outside of your team’s control, meaning backups on storage that nobody who compromises your environment could then also delete, as they’d have to compromise another environment hosting that data.

0 verification errors. This refers to using technology such as SureBackup to spin up your backups and confirm they’re all happy & healthy, in advance of needing them for DR.
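To make the rule concrete, here is a toy checklist evaluator. The parameter names and the example inventory are made up; it only counts what you feed it and does not inspect any real backups:

```python
def check_3_2_1_1_0(copies, media_types, offsite, immutable_or_offline,
                    verification_errors):
    """Evaluate a backup inventory against the 3-2-1-1-0 rule.

    Returns a dict mapping each part of the rule to pass/fail.
    """
    return {
        "3 copies of data": copies >= 3,
        "2 different media": media_types >= 2,
        "1 copy offsite": offsite >= 1,
        "1 immutable/offline copy": immutable_or_offline >= 1,
        "0 verification errors": verification_errors == 0,
    }

# Example: production + local NAS backup + copy in Serbia + Wasabi,
# across block (NAS) and object storage, with one air-gapped copy.
result = check_3_2_1_1_0(
    copies=4, media_types=2, offsite=2,
    immutable_or_offline=1, verification_errors=0,
)
```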

 

With that all out of the way, this is an example template of how your backup solution should aim to look. You’re doing well, and hopefully the feedback I’ve provided above is constructive. If you want to provide further details on what you want to achieve then I’m sure collectively as a community we can help 😊


Thank you @MicoolPaul for the complete explanation of the 3-2-1-1-0 rule. Now I need to do some research and planning for this. I will aim to do as much as I can to fulfill this rule.

After we plan everything then I would probably need some help with configuring all of that.
For the air-gapped Synology NAS, we just manually copy the latest backups and unplug it until we need it again.

I would probably configure it with backups going to the main site (Austria), then separate backup copy jobs to cloud storage and to the other location (Serbia).

I still need to advance in Veeam so that I learn the more advanced features, such as SureBackup and the rest.
 

