Veeam Backup & Replication v12.1 + Spectra BlackPearl = On-Prem Archive Tier

Veeam Backup & Replication (VBR) v12.1 is loaded with many great new features and enhancements.  One of the new features is the extension of the Archive Tier to on-premises storage solutions that are Amazon S3 Glacier compatible.  Spectra’s BlackPearl S3 Hybrid Cloud Storage is one of those solutions.

But what is the Archive tier?  It was first introduced in Veeam Backup & Replication v11 back in February of 2021.  The Archive tier is an additional tier of storage that can be attached to a scale-out backup repository (SOBR).  You can move eligible data to the Archive tier for archival and cost-saving purposes.

The following types of backup files are eligible to be sent to the Archive tier:

  • GFS backups
    • Orphaned GFS backups
    • Veeam Backup for Red Hat Virtualization GFS backups
  • VeeamZip backups
  • Exported backups
  • Kasten K10 backups

For the purposes of this blog, I will be using backup files with GFS flags.  The GFS backups that I will be archiving are weekly full backups.

Prior to VBR v12.1, the Archive Tier options were Amazon S3 Glacier and Microsoft Azure Archive Storage.  Now, with VBR v12.1, you can take advantage of the benefits of “cold” storage while keeping your data on-premises.

The following diagram illustrates the different tiers of a Scale-out Backup Repository that are available in VBR v12.1:

Diagram of Scale-out Backup Repository tiers (VBR v12.1)

BlackPearl Steps:

Let’s walk through how to set up an on-prem Archive Tier using BlackPearl.  The first step is creating the required buckets.  In this example, I will be using BlackPearl S3 Hybrid Cloud Storage for all three of my repository tiers (Performance, Capacity, and Archive).

You can see from the BlackPearl interface that I created my three buckets, and thanks to my “wicked awesome” naming conventions, you can clearly see the intended use case for each bucket:

BlackPearl Buckets
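
(If you prefer to script this step, bucket creation works through any S3-compatible client.  Below is a minimal sketch using Python and boto3; the endpoint URL and access keys are placeholders, and the Performance and Capacity bucket names are assumptions following the same naming convention, since only “black-pearl-archive-tier” is named later in this post.)

```python
import boto3

# BlackPearl exposes an S3-compatible endpoint. The endpoint URL and
# access keys below are placeholders for your own environment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://blackpearl.example.local",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# One bucket per SOBR tier. Only "black-pearl-archive-tier" is named in
# this post; the other two names are assumed from the same convention.
for bucket in ("black-pearl-performance-tier",
               "black-pearl-capacity-tier",
               "black-pearl-archive-tier"):
    s3.create_bucket(Bucket=bucket)
    print(f"Created bucket: {bucket}")
```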

Now, how does Veeam Backup & Replication know which bucket is Amazon S3 Glacier compatible and eligible to be an Archive Tier?  If you have read any of my previous “Veeam Amazing Object Storage Tips & Techniques” blogs, you can probably guess what the correct answer is.

The Smart Object Storage API (SOSAPI) is the mechanism by which the bucket, in this case “black-pearl-archive-tier”, is enabled to be an Archive Tier.  In VBR v12.1 we added a new parameter to the SOSAPI that lets an object storage provider like Spectra flag a bucket as capable of being used as a target for the Archive Tier.
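
(The full schema is defined in Veeam’s SOSAPI specification for storage partners, so the sketch below is purely conceptual: the file location, element names, and flag are hypothetical placeholders meant to illustrate the idea of a vendor dropping a capability marker into a bucket, not the real schema.)

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://blackpearl.example.local",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Hypothetical capability document. The real element names and file
# location are defined in Veeam's SOSAPI specification; this only
# illustrates the concept of flagging a bucket as archive-capable.
capability_xml = """<SystemInfo>
  <ModelName>Spectra BlackPearl</ModelName>
  <ProtocolCapabilities>
    <ArchiveTierSupported>true</ArchiveTierSupported>
  </ProtocolCapabilities>
</SystemInfo>"""

s3.put_object(
    Bucket="black-pearl-archive-tier",
    Key=".veeam.sosapi/system.xml",  # assumed path, for illustration only
    Body=capability_xml.encode("utf-8"),
)
```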

VBR Steps:

Now we can hop into the VBR Console to set up the backup infrastructure to use these new BlackPearl buckets.

For the sake of keeping the blog under the maximum character count, I am going to assume you know how to create an object storage repository and I will focus on the archive tier repository steps.  If you need a refresher or are new to creating object storage repositories, our HelpCenter has all the information that you need regarding Object Storage Repositories.

From the VBR console, select “Add Repository” and choose “Object storage” (note: same steps for the Performance and Capacity tiers):

Adding Object Storage Repository

On the next screen, choose “S3 Compatible” (note: same step for the Performance and Capacity tiers):

Selecting S3 Compatible Repository

This step is where we diverge from the path we would take for the Performance and Capacity tiers.  For those repositories we would select “S3 Compatible”, but to create the Archive Tier repository we now select “S3 Compatible with Data Archiving”:

Archive Tier Repository

Notice the icon at the top of the screenshot below: a blue snowflake, indicating that we are creating an Archive Tier repository.  On this screen we need to enter the “Service point”, “Region”, and “Credentials” for the bucket we are going to use.  This is the same information you need for the Performance and Capacity tier buckets as well.  The difference on this screen is the “Archiver appliance” section; for the Performance and Capacity tiers, you specify the “Connection mode” there instead:

Archive Tier Repository Settings
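
(A quick sanity check before filling in the wizard can save a round trip: confirm that the service point, region, and credentials actually reach the bucket.  A minimal sketch with boto3, using placeholder values:)

```python
import boto3
from botocore.exceptions import ClientError

# The same service point, region, and credentials you will enter in the
# wizard; all values here are placeholders for your own environment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://blackpearl.example.local",
    region_name="us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

try:
    s3.head_bucket(Bucket="black-pearl-archive-tier")
    print("Service point, region, and credentials all check out.")
except ClientError as err:
    print(f"Bucket not reachable with these settings: {err}")
```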

The Archiver appliance is a service that runs on the server you select; I am using the VBR server in my example.  The Archiver appliance’s purpose is to build the objects that comprise the GFS backup being archived and write them to cold storage.  The source objects for this example will come from the Capacity tier, but you can also archive data directly from the Performance tier, as long as the Performance tier uses object storage backed extents.

Now that I have my repositories created for all three tiers, I will create a Scale-out Backup Repository (SOBR), which is required for the Archive Tier.  I am again going to assume you know how to create a SOBR, but if you don’t, the HelpCenter is a great resource for Scale-Out Backup Repositories.

Within the SOBR wizard you will see the Archive Tier option.  When you get to that step, enable the Archive Tier for archiving GFS backups by checking the “Archive GFS full backups to object storage” checkbox.  Once you do that, you can select the appropriate Archive Tier repository.  The next step is to choose which GFS backups will be archived based on age:

Archive Tier Settings

The default is to archive GFS backups that are older than 90 days.  But I am impatient, so I set the value to 0 days.  That way, the backups I create today will be eligible to be archived today.  Also, click on the “Storage” button and deselect any checked boxes on the “Storage Settings” screen.
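
(To make the age threshold concrete, the eligibility test boils down to a simple date comparison.  This toy sketch is not Veeam’s actual implementation, just the logic the setting expresses:)

```python
from datetime import datetime, timedelta

def eligible_for_archive(gfs_created: datetime, older_than_days: int) -> bool:
    """A GFS restore point becomes eligible once it is older than the threshold."""
    return datetime.now() - gfs_created >= timedelta(days=older_than_days)

today_backup = datetime.now()
print(eligible_for_archive(today_backup, 90))  # False: must age ~3 months first
print(eligible_for_archive(today_backup, 0))   # True: eligible immediately
```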

Now you can see my completed SOBR “BlackPearl S3 SOBR” and the three repository tiers.  Notice the Archive Tier repository has the distinctive blue snowflake to indicate “cold” storage:

SOBR Repository Tiers

The next step is to create a new backup job that will use my newly created SOBR so that we can archive some GFS weekly full backups to BlackPearl’s cold storage via our Archive Tier.

Within the job’s configuration settings you can configure the GFS backups for archival purposes.  The first step is to check the “Keep certain full backups longer for archival purposes” checkbox.  Then click on the “Configure” button to configure the GFS settings in the pop-up window.  In my example, I chose to keep my weekly full backups for 1 week, created on Monday:

GFS Settings

I ran an initial full backup (.vbk) and then 4 incremental backups (.vib).  You can see the full backup chain here:

Backup Chain with Weekly GFS

Notice that the full backup is marked with “W” indicating that it is a Weekly GFS backup.

Now that I have my backup chain created, I need to close the chain by creating a new full backup.  Only after the current backup chain is “closed” will the GFS backup get archived.

For the purposes of this blog, I will manually initiate an Active Full backup.  To initiate an Active Full backup, you can either click on the “Active full” button on the toolbar or right-click on the backup job and select “Active full” from there:

Closed Backup Chain

Now the GFS backup is eligible to be archived.  This can be done by manually running the tiering job or by waiting for the system offload job to run.  By default, the system-generated job runs every 4 hours, and as you know by now I am impatient, so you can guess which option comes next.

No surprise, we will do it manually by using <Ctrl> + <right-click> on the SOBR name and selecting “Run tiering job now”:

Manually Running the Tiering Job

Once the tiering job finishes you should see the archive offload job start running:

Archive Tiering Job

We can now see in the VBR console the Archive Tier along with the backup that resides in the Archive Tier and what repository is being used:

Archive Tier via VBR Console

Now if we check the Archive Tier bucket “black-pearl-archive-tier” and look at the objects, we will see the storage class is “DEEP_ARCHIVE”, which, when using BlackPearl, indicates the objects are in its “cold” storage:

Deep_Archive Objects
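
(You can verify the same thing outside the BlackPearl UI by listing the objects and their storage class with any S3 client.  A sketch with boto3 and placeholder connection details:)

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://blackpearl.example.local",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Walk the archive bucket and print the storage class reported for each
# object; offloaded GFS data should show DEEP_ARCHIVE.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="black-pearl-archive-tier"):
    for obj in page.get("Contents", []):
        print(f"{obj['Key']}  ->  {obj.get('StorageClass', 'STANDARD')}")
```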

BlackPearl will now take these objects and place them on tape, so the final resting place for objects with the DEEP_ARCHIVE storage class will be tape.  And from the BlackPearl console I can see the tape(s) that my GFS weekly backup is stored on:

Tape used as Cold Storage

The combination of Veeam Backup & Replication v12.1 and Spectra BlackPearl S3 Hybrid Cloud Storage now allows you to implement an on-premises Archive Tier.  You get the benefits of storing your GFS backups on “cold” storage without incurring the data retrieval costs you would with some public cloud solutions.

If you have any questions, please reach out to me via the comments feature and I will do my best to answer them.

Actually never heard of that storage solution @SteveF 😊 Nice ‘how-to’ post on how to set it all up.


Yes very interesting solution and explanation for sure.  Thanks for sharing.


@coolsport00 Thanks!!!  Are you unfamiliar with Spectra and/or their BlackPearl product?  Check out Spectra's Veeam Landing Page for more info about our partnership.  BlackPearl S3 has also been validated via the Veeam Ready Program.


Not at all.  Appreciate the reference links!


Thanks @Chris.Childerhose.  We are constantly innovating and iterating here at “Veeam Speed”, so there’s plenty for me to share.  If you have any ideas for future blogs, please reach out to me.


I will for sure reach out.  More than likely in the new year, as end of year is busy as always.  😋


Cool solution.  I will have to look into this a bit more when I have some time!


Amazing Steve!

Thanks for sharing!


Interesting article; this has given me a good preview of what my configuration will look like, but with a different solution :)

I would have preferred to be able to choose move or copy, maybe both, like the archive tier.

Have you tested backup copy as well?

I’m curious, maybe I missed something: why do you need multiple buckets when the storage class is different?

How long does BlackPearl wait to send data to tape?  Is there a staging zone?


@BertrandFR I am not sure I understand your question.  There is no option to copy/move to the Archive Tier.  Perhaps you meant the Capacity Tier?  If so, the two tiers don’t have the same use cases, so they don’t have the same capabilities.  The Archive Tier’s use case is to store your GFS backups (weekly, monthly, yearly, etc.) on cold storage, with the intention of saving on storage costs, since cold storage is typically the least expensive storage tier.

The GFS backups are moved from the source tier, so there is no “copy” option available.  If I had initiated another offload job, the weekly full would’ve been removed from the Capacity Tier.

In my example there were 3 buckets, 1 per tier (Performance, Capacity, and Archive).  If I had used something other than object storage for the Performance Tier, I would’ve only used 2.

You can configure BlackPearl to determine when to move/copy the objects to tape.  I chose to do it immediately.  My reasoning was that if I needed to do a restore, the objects required for restoration were still in my Performance tier as well as the Capacity tier.  So if I needed the objects back quickly, I had 2 copies in more performant repositories than the cold storage tier.

Hope this answers your questions.


Hello, thank you for your complete answer @SteveF.  Indeed, I meant the Capacity tier, as you understood, sorry 😶

I’m completely aware of saving on storage costs, but that’s not my only consideration.  I would be happy to have files on two different media for a specific amount of time to meet internal security compliance requirements.

My use case will probably be:

centralise backup for several hundred ROBO.

backup copy => SOBR with performance tier on object storage/standard storage class and archive tier for storage class glacier

If you want, we can keep talking about it in PM to avoid flooding your great article :)


@BertrandFR, the Archive tier isn’t a copy of the Performance tier; however, the Capacity tier with copy mode enabled is a copy of the Performance tier.  So to preserve the 3-2-1 rule, you shouldn’t go directly from the Performance tier to the Archive tier.  Also, the source of the Archive tier needs to be an object storage backed repository.  If you decide to go to the Archive tier directly from the Performance tier, the Performance tier will need to be object storage backed.

My example of Performance tier → Capacity tier → Archive tier is the best approach and also maintains the integrity of the 3-2-1 rule.  And if you use immutability in your tiers, then the 3-2-1-1-0 rule is satisfied.

Hope this helps.


That’s awesome, thank you so much!


Hello,

Could you tell me why I shouldn’t do a move from the Performance tier on a SOBR with object storage on the Performance tier?

Is it a software limitation or related to best practices?

Limitations for Archive Tier - User Guide for VMware vSphere (veeam.com)


@BertrandFR the software will allow you to archive directly from the Performance tier when it is comprised of object storage.

My intent was to bring to your attention that if you move the GFS backup from the Performance tier to the Archive tier, you will only have 1 copy of that GFS backup, which doesn’t satisfy the 3-2-1 rule.


Thank you for the useful experience and sharing @SteveF 
May I ask what the procedure is for restoring/recovering data from the Archive tier?  Is it necessary to move it to the Capacity tier first?

As you said, the backup that enters the Archive tier is a full backup based on the GFS scenario that we created.  My question is: how do I create a scenario where there are 4 full/GFS backups in a year?  And what happens in the following year?


When you do the restore from the Archive tier, VBR will handle all of the work to get the data restored for you.  You don’t need to do anything other than select what you want restored.  Here are the instructions for Restoring from Archive Tier.

You would just need to schedule the creation of the 4 GFS backups within VBR during the year, based upon your requirements.  A great explanation of VBR’s GFS backups can be found here: VBR + GFS Backups

Hope this helps.

Steve


thanks for the explanation

I have tried a scenario like this:

- Set up retention for 7 days
- Make weekly full GFS backups
- On the scale-out configuration, we make backup as copy (replication)
- In the config, I set the move to archive tier value to 0 days

My question is: when will the backup be copied or replicated to the Archive tier?  Because I've been waiting for 7 - 10 days and no copies have entered the Archive tier.

thanks 

best regards

Haykal


Check out this post on the community hub; it should explain the Archive Tier and how it works.

SOBR: Veeam Archive Tier – Calculations and Considerations (in v11) | Veeam Community Resource Hub


Great article.  Just one thing so that it does not confuse folks: only vSphere CSI provisioned volumes can be sent from Kasten to a Veeam repository.  You would also still need a different type of location profile in Kasten for the metadata.  https://docs.kasten.io/latest/usage/configuration.html#veeam-repository-location

