
Dear all, we have the following data protection requirements, so we are looking forward to your guidance on creating a backup & retention policy.

  • File Server - at present we have 5TB of files & folders, currently backed up using Veeam.
    • The requirement is a 36-month retention period, and the backup job should run every hour.
    • The objective is to be able to restore any file or folder within the 3-year period.
  • VM backup with at least 2 restore points, with daily incrementals and a weekly full.
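A quick back-of-the-envelope calculation (a sketch only, using the figures from the requirements above) shows the scale that "hourly backups retained for 36 months" implies:

```python
# Rough restore-point math for the stated requirement:
# hourly backups retained for 36 months.
HOURS_PER_DAY = 24
DAYS_PER_MONTH = 30.44  # average Gregorian month length
RETENTION_MONTHS = 36

hourly_points = round(HOURS_PER_DAY * DAYS_PER_MONTH * RETENTION_MONTHS)
print(f"Hourly restore points over {RETENTION_MONTHS} months: ~{hourly_points}")
# 24 * 30.44 * 36 ≈ 26,300 restore points -- far more than a single
# simple backup chain is designed to hold, which is why tiered
# retention (GFS) comes up later in this thread.
```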

Thank you in advance

Hey, so your retention for the VM will depend on which type of backup chain you use: reverse or forward incremental. Any solution can give you the "at least" 2 restore points; there are a few considerations around storage performance, but I won't dilute my reply by going off on a tangent. For your reading, here's a link to backup methods. It's for vSphere, but it works the same for Hyper-V as it has nothing to do with the hypervisor.

 

I would recommend using a modern file system such as ReFS or XFS to make use of Veeam's Fast Clone technology if you decide to use synthetic fulls (create a new chain weekly as a full, independent backup, using your existing backups as the source to fetch existing data from) or active fulls (create a new chain by reading all data from your production again). ReFS is for Windows and XFS is for Linux; further reading on Fast Clone here, along with the requirements for each file system.
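To illustrate why Fast Clone matters here, below is a rough comparison of repository consumption for weekly active fulls versus block-cloned synthetic fulls over one year. The numbers are illustrative assumptions (a 5 TB full, 0.5 TB of unique change per week), not Veeam sizing guidance:

```python
# Hypothetical footprint comparison: weekly active fulls vs
# Fast Clone synthetic fulls on ReFS/XFS. All figures are assumed.
SOURCE_TB = 5.0         # assumed size of one full backup
WEEKLY_CHANGE_TB = 0.5  # assumed unique changed data per week
WEEKS = 52

# Active fulls: every weekly full rewrites all blocks to disk.
active_tb = WEEKS * (SOURCE_TB + WEEKLY_CHANGE_TB)

# Fast Clone synthetic fulls: only the initial full plus each
# week's changed blocks hit disk; fulls reference existing blocks.
fastclone_tb = SOURCE_TB + WEEKS * WEEKLY_CHANGE_TB

print(f"Active fulls : ~{active_tb:.0f} TB")    # ~286 TB
print(f"Fast Clone   : ~{fastclone_tb:.0f} TB") # ~31 TB
```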

 

As for your file server, that'll depend on whether the file server is an actual server or a NAS (assuming a server) and whether it's a VM or not (as you've mentioned it separately, I'm assuming not). If it's physical you'll need to use Veeam Agent; if it's in the cloud you should use the relevant Veeam for <public cloud here> product; if it's a VM or NAS then use Veeam Backup & Replication (you could also manage the Veeam Agent job from this product anyway for single-pane-of-glass management). Provided your production and backup infrastructure are both fast enough, this is easily achievable.

 

A final consideration I'd like to mention is that you should look to achieve the 3-2-1-1-0 best practice rule in your backups. I won't reiterate it all here, as there's an awesome blog post by a Veeam employee. Crucially though, if you need to retain data for 36 months, it's likely you'll be in a lot of trouble if this falls short, which is why it's important to consider multiple copies of your data, off-site requirements, and ransomware resistance. Do give the article a read; it covers many more considerations, so you can address any assumptions being made about your data protection scope that may or may not be true.

Good luck!


> Crucially though if you need to retain data for 36 months, then it's likely you'll be in a lot of trouble if this falls short. Which is why it's important to consider the multiple copies of data, off site requirements and ransomware resistance.

Great post @MicoolPaul

The quoted part especially is most important and also came to my mind. You need to ensure that you create secondary backup copies which are off-site, offline, secured, or all of the above, in order to achieve your long-term retention goals. So 3-2-1 or 3-2-1-1-0 is what you should look at.


@MicoolPaul and @regnor, I would like to thank you for your responses. I have gone through the article and will be considering it in our design now. One of my main concerns is how many jobs we need to create to achieve 36 months of retention with an incremental backup job running every hour. The file server is a physical Windows NAS appliance; we are backing it up using an agent.

 



Do you need 1-hour retention granularity for the full 36 months? Or a 1-hour RPO for the last 30 days, for example, then dropping down to a lower frequency such as weekly/monthly?



Excellent response man!!! Thanks. Making sure we have an immutable copy is key in today's world.


@DeepC I *highly* recommend watching Tim Smith's Proxy/Repo sizing session at the VeeamON online conference happening this Tue-Wed (25-26 May). He goes into explicit detail about sizing, which would be a great "piggy-back" off of what @MicoolPaul shared above.

Cheers!


Hi @DeepC, @MicoolPaul raised a very important question: do you need 1-hour retention for the whole 36 months, or is it sufficient to use a lower frequency after a day or a week? I would recommend the latter.

First of all, 999 is the maximum number of restore points in a regular job not using GFS (weekly, monthly, yearly). If you need hourly retention, that's 24 restore points a day, and I would not recommend a chain that is too long. So in that case I would recommend a full backup every day; a weekly full would mean a chain of 1 full plus at least 6 x 24 incremental backups, which is not recommended!

So what I would suggest: set the number of restore points to 24 x 14 days = 336 RPs, so you can restore on an hourly basis for 2 weeks. Take a synthetic full backup every day (make sure to use ReFS/XFS with Fast Clone, otherwise the storage needed will be huge!), and activate GFS for weeklies with the number of restore points set to 52 x 3 = 156 weekly RPs = 36 months. I think that will more or less satisfy your needs.

If you have enough storage, I would even recommend taking a weekly active full, so you will have the same data block more than once in your repository; with a synthetic full, the backup points to existing data blocks on your repository.
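The tiered scheme above can be sanity-checked in a few lines, using the same figures as in the post (24 hourly points per day for 14 days, plus 52 weekly GFS points per year for 3 years):

```python
# Sanity-check of the proposed tiered retention scheme.
HOURLY_PER_DAY = 24
SHORT_TERM_DAYS = 14
GFS_WEEKS_PER_YEAR = 52
YEARS = 3

short_term_points = HOURLY_PER_DAY * SHORT_TERM_DAYS  # hourly RPs kept
gfs_weekly_points = GFS_WEEKS_PER_YEAR * YEARS        # weekly GFS RPs kept

print(f"Short-term hourly points: {short_term_points}")  # 336, well under the 999-per-job cap
print(f"GFS weekly points       : {gfs_weekly_points}")  # 156 weeklies = 36 months
```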



As noted by @coolsport00, this is one of the best sessions to attend for sizing. I attended it last year and will again this year for any changes in v11. Also be sure to check out the Best Practices website, which outlines this information as well - https://bp.veeam.com/vbr

The Design section covers Proxy and Repos.

