Solved

How do I satisfy the 3-2-1 rule?


How would you configure the rule given this infrastructure and its limitations?

Infrastructure:

HPE Simplivity - 4 nodes

  • Running policy-based backups for nearline, native restores.

Veeam Backup & Replication 11a

  • Veeam Backup Server is a VM
  • Veeam Backup Server is a Proxy Server
  • Additional VM running as a Proxy Server
  • 1x HPE StoreOnce 3650 33TB as the backup target.
  • Wasabi for S3 Storage (25TB capacity)
  • (No physical server in the environment)
  • Enterprise license edition

I would like to use the on-premises StoreOnce as my long, deep backup storage (say, 6 months) and only use Wasabi for disaster recovery, with, say, 1-2 months of retention there.

We have Production VMs that we want offsite; Dev and QAS can stay onsite.

If I only use a Backup Job with a SOBR for the offsite VMs, how do I get that level of retention onsite and that level offsite?

Or do I have to stretch to a Backup Copy Job (BCJ) with a SOBR? That increases the data stored on the StoreOnce, as nearly two copies of the data are kept.

 

Thanks. Appreciate any insight.

 


Best answer by Chris.Childerhose 31 August 2022, 17:36


11 comments

Userlevel 7
Badge +20

Well, you can set up a SOBR with Wasabi as the Capacity Tier, since that is the only way to use S3 storage in v11a right now (v12 will allow backing up direct to Object storage).  Then set up your retention policy with the required restore points, and configure GFS on your jobs as well to ensure the longer retention is kept as noted.  Also be sure to set up the Move policy on the SOBR so the required restore points are moved out to Wasabi as needed.

You will also need to ensure the job is configured with the required synthetic or active full backup at the required time to close the chain, so it can be moved out to Wasabi.

This should help with configuring the Capacity Tier: Capacity Tier - User Guide for VMware vSphere (veeam.com)
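
To make the Move policy concrete, here is a toy Python sketch of the eligibility rule described above (illustrative only, not Veeam code; the 30-day window is an example value, not necessarily the product default):

```python
from datetime import date, timedelta

# Toy model of the SOBR Move policy (illustrative, not Veeam code):
# a chain is moved to the Capacity Tier only once it is sealed (a
# newer full has started the next chain) and it has aged past the
# operational restore window.

OPERATIONAL_WINDOW_DAYS = 30  # example "move backups older than N days" value

def chain_is_movable(chain_points, sealed, today=None):
    """chain_points: dates of the full backup and its increments."""
    today = today or date.today()
    cutoff = today - timedelta(days=OPERATIONAL_WINDOW_DAYS)
    return sealed and max(chain_points) < cutoff

# A weekly chain sealed six weeks ago is eligible; the chain still
# being written to is not.
old_chain = [date.today() - timedelta(days=42 - i) for i in range(7)]
active_chain = [date.today() - timedelta(days=6 - i) for i in range(7)]
print(chain_is_movable(old_chain, sealed=True))       # True
print(chain_is_movable(active_chain, sealed=False))   # False
```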

Userlevel 7
Badge +20

There is also a Best Practices guide for StoreOnce configurations: StoreOnce - Veeam Backup & Replication Best Practice Guide

Userlevel 7
Badge +11

Do you have a Catalyst license for your StoreOnce?

Userlevel 7
Badge +8

Hi there!
I would definitely follow @Chris.Childerhose's recommendations!
Second, in my own experience with StoreOnce, deduplication is great - around 20:1 in a good scenario.
But keep in mind two things.
First: deduplication is per Catalyst store, so to get a better dedupe ratio you should put the same type of data (Windows VM backups, for example) in one repository.

Second: there are two options when setting up the StoreOnce - deduplication before or after storing into the appliance. If your appliance is physical, no problem at all; it will have enough compute and memory to handle the task. But keep this in mind if you set up a StoreOnce virtual appliance.

Hopefully this helps.
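
As a quick back-of-the-envelope illustration of what that ratio means (made-up numbers, not measurements from this environment):

```python
# What a 20:1 dedupe ratio means for Catalyst store consumption.
# Numbers are made up for illustration.
logical_tb = 40.0      # total logical backup data written to the store
dedupe_ratio = 20.0    # the "good scenario" ratio mentioned above
physical_tb = logical_tb / dedupe_ratio
print(f"{logical_tb:.0f} TB logical -> ~{physical_tb:.0f} TB physical")  # ~2 TB
```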

Userlevel 7
Badge +20

Thanks for the mention and added tips. 👍

Thanks for the responses.


Let me provide additional information...

  • The site with the infrastructure above is live, with backups completing to the on-premises StoreOnce appliance (for which we have a Catalyst license)
  • All VM Backups (Production and Dev/QAS) are stored on a single Catalyst Volume configured as a Veeam Backup Repository.
  • We have separate Backup Jobs for Production VMs and Dev/QAS

Production VM backup job details
Schedule: every day 
Retention: 30 days
Synthetic Full: Weekend
GFS: 8W, 6M
Destination: StoreOnce Catalyst (On-premise)

Requirement:
Want to offload production VM backups to Wasabi (licensed for 25 TB of S3 storage).

Production VM Stats:

  • 4 TB of Full VM data to offload.
  • 680 GB of daily avg change.

How do I control storing only a limited number of backups on Wasabi from my original source backups?
I do not want to store 6 months' worth on Wasabi - perhaps 2 months.

How can I achieve this, please?
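
A rough sizing sketch against the 25 TB Wasabi cap, using the stats above; the compression factor is a loud assumption, and real Capacity Tier consumption depends on Veeam's block reuse:

```python
# Back-of-the-envelope sizing for ~2 months of production backups on
# Wasabi. The compression factor is an assumption; real Capacity Tier
# consumption depends on Veeam's block reuse and data compressibility.

full_tb = 4.0            # full production backup size
daily_change_tb = 0.68   # average daily change
retention_days = 60      # ~2 months offsite
compression = 2.0        # assumed ~2:1 reduction on changed blocks

worst_case_tb = full_tb + daily_change_tb * retention_days
estimate_tb = full_tb + (daily_change_tb * retention_days) / compression
print(f"no reuse/compression: ~{worst_case_tb:.0f} TB")  # ~45 TB
print(f"with ~2:1 reduction : ~{estimate_tb:.0f} TB")    # ~24 TB - tight vs 25 TB
```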

Userlevel 7
Badge +20

The issue with Capacity Tier is that there is no control over which restore points get offloaded.  It offloads chains that are complete and inactive.

You will have a solution for this when v12 comes out with direct-to-Object backup.  As a workaround, the only other suggestion I can think of would be:

  1. Build a new SOBR with Wasabi as the Capacity Tier
  2. Set up a new job (a clone of the existing one) and set its GFS retention to what you want to see offloaded
  3. Run this job to do the offloading to Wasabi, and keep the regular job above pointed at just the local storage (see the sketch below on what the extra job costs on the StoreOnce)
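
On the OP's earlier concern about near-doubled StoreOnce usage: since Catalyst dedupes within the store, the cloned job's data should mostly overlap the original job's. A hedged sketch with assumed numbers:

```python
# Why the cloned job need not double StoreOnce consumption: Catalyst
# dedupes within the store, so the clone's blocks should largely match
# the original job's. The overlap figure is an assumption, not a
# measurement from this environment.

full_tb = 4.0            # production full backup size (from the stats)
dedupe_overlap = 0.95    # assumed share of blocks shared with the original job

logical_extra_tb = full_tb                          # what the clone writes logically
physical_extra_tb = full_tb * (1 - dedupe_overlap)  # what actually lands on disk
print(f"logical extra : {logical_extra_tb:.1f} TB")
print(f"physical extra: ~{physical_extra_tb:.2f} TB after dedupe")  # ~0.20 TB
```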
Userlevel 7
Badge +8

The idea looks great. I was thinking of storing just the latest week in Wasabi as a DR copy outside the infrastructure, using a copy of your jobs that keeps a synthetic full and 6 incrementals; from there you can start playing with retention, number of copies, etc.

Also keep in mind that even if you delete something from Wasabi, the deleted data can remain on your bill for up to 90 days (I'm not completely sure of the exact period; in Spain there is a provider that keeps it for 60 days).
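
For reference, Wasabi's standard plans apply a minimum storage duration (commonly 90 days; check your plan). A small sketch of the billing effect (my own illustration, not Wasabi's exact formula):

```python
from datetime import date

# Minimum-storage-duration billing, as Wasabi-style object storage
# commonly applies it: objects deleted before the minimum period are
# billed as "deleted storage" for the remainder of that period.

MIN_DURATION_DAYS = 90

def billed_days(uploaded: date, deleted: date) -> int:
    """Days an object is billed for, even if it is deleted early."""
    stored = (deleted - uploaded).days
    return max(stored, MIN_DURATION_DAYS)

# A restore point offloaded 2022-09-01 and aged out 30 days later
# is still billed as if it were stored for the full 90 days.
print(billed_days(date(2022, 9, 1), date(2022, 10, 1)))  # 90
```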

But it is a very good S3 storage vendor, and cheaper than most you will find on the internet.

Cheers.

Userlevel 7
Badge +20

You cannot copy directly to Wasabi - that is the issue right now, and why you need a SOBR to offload.  Once v12 comes, sure, you could do a copy job out to Wasabi with the required retention. 😉

Userlevel 7
Badge +8

I missed that the data needs to land locally before moving up, right - the v12 feature of uploading directly to the cloud will clean this up a lot.

Thanks for the correction.
You will definitely need a SOBR to upload/offload, but it can be a different one from the production SOBR you have right now, right? He can make a new repo and SOBR, then configure the backup copy and the upload to Wasabi.

Cheers.

Userlevel 7
Badge +20

Yes, that is correct - and those are the steps I outlined.  😉
