
Hello guys 😄

Hope everyone is doing well! On my side, I'm on vacation 🤙. I wanted to share with you some tests done with the Veeam v12 beta (build 12.0.0.817) and its incredible new feature: backup directly to object storage.

Object Storage
For my test I used Wasabi as the cloud object storage provider. You can find some really great posts about Wasabi online.

Wasabi is now available in the list of providers when you add an object storage repository.

There is less information to enter, which is appreciable.

 

You will find your repository listed with the Wasabi Cloud Storage type.
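For readers who also script against the same bucket outside of Veeam, the service point the wizard asks for follows Wasabi's public S3 URL pattern. Here is a minimal sketch, assuming the standard `s3.<region>.wasabisys.com` endpoints (verify against your Wasabi console; the bucket name is hypothetical):

```python
# Sketch of the minimal settings the v12 wizard asks for with Wasabi:
# service point, region, credentials, and bucket. The endpoint pattern
# below is an assumption based on Wasabi's public S3 URLs; check your
# Wasabi console before relying on it.

def wasabi_service_point(region: str) -> str:
    """Build the S3 service point URL for a given Wasabi region."""
    if region == "us-east-1":
        return "https://s3.wasabisys.com"  # legacy default endpoint
    return f"https://s3.{region}.wasabisys.com"

# Settings the "Add Object Storage Repository" wizard would need:
repo_settings = {
    "service_point": wasabi_service_point("eu-central-1"),
    "region": "eu-central-1",
    "bucket": "veeam-v12-backups",  # hypothetical bucket name
    # access/secret keys should come from a Wasabi sub-user scoped to this bucket
}

print(repo_settings["service_point"])  # https://s3.eu-central-1.wasabisys.com
```

A dedicated sub-user with bucket-only permissions keeps the blast radius small if the backup server's credentials ever leak.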

 

Backup Job

This kind of backup has multiple advantages:

  • Financial: you can use on-prem object storage, and object storage can be cheaper than SAN
  • Built-in durability and reliability
  • Less hardware to manage

Let's take a look at the setup, because there are some technical side effects.

As usual, you select your repository; the GFS options are still available, and you can configure a backup copy job if necessary.

The advanced options are simplified! Goodbye backup modes; the only remaining option is to schedule an "active full".

This is due to how object storage works. I haven't tested the active full option, but I presume the job will transfer all blocks and not only the new ones.
I don't know why synthetic fulls are not available, since GFS points are still possible.
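To see why the active-full question matters for transferred data (and S3 API/egress costs), here is a toy block-level model. This is my assumption of how an active full differs from an incremental, not Veeam's actual backup format:

```python
# Toy model: compare blocks uploaded by an active full vs. an incremental.
# Block counts and the change rate are made up for illustration; Veeam's
# real format and changed-block tracking are more involved than this.

def blocks_to_upload(all_blocks: set, changed_blocks: set, active_full: bool) -> set:
    """An active full re-reads and uploads every block; an incremental
    uploads only the blocks that changed since the last run."""
    return set(all_blocks) if active_full else set(changed_blocks)

disk = {f"blk{i}" for i in range(1000)}            # 1000 blocks on the source VM
changed = {f"blk{i}" for i in range(0, 1000, 20)}  # ~5% daily change rate

print(len(blocks_to_upload(disk, changed, active_full=True)))   # 1000
print(len(blocks_to_upload(disk, changed, active_full=False)))  # 50
```

Under this model, a scheduled active full would re-upload the full data set each time, which is worth factoring into cloud egress and PUT-request pricing.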
 

The "defragment and compact full backup file" periodic option disappears too.

After my job ran, I expected to find my backups under "Object Storage", but they appear in the "Disk" section. Maybe this will change in the release version.

Backup Copy

Here there isn't much difference; the two copy modes (mirroring and pruning) are available.

I tested a backup copy of an object storage backup job to another bucket, and it works perfectly.

As with the backup job, the files are visible under "Disk (copy)".

Use-cases

There are many; here are some ideas:

  • Backup to on-prem object storage
  • Backup copy jobs to object storage. For some of my customers, I'll probably review externalization to object storage using backup copy jobs. With a SOBR it's not possible to change the retention configured in the backup job, so if you have a long retention on-premises you will have the same in S3. A backup copy job direct to S3 will simplify some architectures, and as a bonus the GFS points should normally be protected for their whole retention period.
  • Agents running in the cloud or in home offices
  • Backup direct to the cloud for archival
  • Configuration backup to the cloud. Protecting the backup configuration is really important; I always recommend regularly exporting this backup to offline media.
    With the possibility to back it up directly to the cloud, we benefit from the durability and reliability of object storage (the "11 nines"). In my opinion, a backup copy task for this job is still missing.

Don’t hesitate to share some other ideas!

Support

It will be possible to use direct backup to object storage for:

  • Virtual machine backups (VMware, Hyper-V, AHV...)
  • NAS backup
  • Backup copy
  • SQL, Oracle
  • Configuration backup
  • Veeam Agent

Bugs?

Currently, when I delete a backup stored in object storage, the task runs indefinitely. The files are deleted, but I have to close and reopen the console to see the task marked as successful.

Conclusion

A real game changer: this will give architects many more design options. It will simplify some requests, in particular by avoiding the use of a SOBR just to externalize backups.

Were you expecting this functionality, and if so, for what needs?

Cheers!

I’m currently using an Exagrid appliance as my backup repository. Can the “Direct Backup to Object Storage” be configured to auto-start as soon as the corresponding backup job completes?

I’m going to use AWS Glacier Deep Archive.

Are you copying from the Exagrid or doing a job direct to Object? If the latter, you can set the direct-to-Object job to run after the other backup job. If using a SOBR with the Exagrid, you can add object storage to the Capacity Tier and turn on Copy Mode.

Backup Job direct to Object (Exagrid). I’m not using SOBR.

I’m hoping that when the Backup Job finishes, it will then auto-start the “Direct Backup to Object Storage” to AWS Glacier Deep Archive. Is that possible?

Direct to Object means you can send a backup job or copy job directly to object storage. So you need to set it up that way; it's not automatic.

For clarification, I can create a “Backup Copy Job” that will perform a “Direct Backup to Object Storage” to AWS. Is that correct?

Yes that is correct. 

Another clarification, can I set the “Direct Backup to Object Storage” to offload the data to AWS after my Backup Job finish?

Edit: I found my answer in the link below, but can the data be saved onto disk as well and then offloaded to AWS, so my data can be in two places?

Veeam v12 Sneak Peek: direct backup to Object Storage • Nolabnoparty

 

It can if using a SOBR as I mentioned.  That is the way to accomplish this.

I’m not using SOBR. Then my only option is using the “Backup Copy Job” option. Correct?

Yes, then you have it in two places. SOBR is one option, that is all. A BCJ will also work.

After you mentioned SOBR, this seems to be what I would like to do. Is there a step-by-step guide you can point me to on how to set up a Performance Tier → Capacity Tier → Archive Tier? I can’t find much information online about offloading data from a Veeam SOBR to AWS Glacier Deep Archive.



Hello @vane,

To implement the Archive Tier you need Amazon S3 Glacier storage or Azure Archive Storage.
You can find more information here:
https://helpcenter.veeam.com/docs/backup/vsphere/archive_tier.html?ver=110

And here is a link on how to set up a SOBR:
https://helpcenter.veeam.com/docs/backup/vsphere/sobr_add.html?ver=110

 



Thanks.

Also, if I set up a SOBR in Veeam using Exagrid as the Performance Tier, and I need to delete data in the Exagrid Landing Zone to free up space, will it delete the data in AWS S3 and/or Glacier as well?



Here are both the help guide and the Best Practice guide for SOBR.

Scale-Out Backup Repository - User Guide for VMware vSphere (veeam.com)

Scale-Out Repos - Veeam Backup & Replication Best Practice Guide



I'm not sure I really understand.
In your SOBR, backups are copied or moved (or both) to your Capacity Tier, depending on your configuration. In the Archive Tier you only have GFS backup files older than a defined number of days, moved from your Capacity Tier to the Archive Tier.
Have you enabled immutability in your bucket?

Also, your oldest backup files (GFS points) are not necessarily in the landing zone but in the retention zone of your Exagrid.
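The move rule described above can be pictured as a simple age check on GFS points. The sketch below is an illustrative model only: the field names and the 90-day window are hypothetical, not Veeam's internal representation.

```python
from datetime import date, timedelta

# Sketch of the Archive Tier move rule: only GFS restore points older than
# the configured window leave the Capacity Tier. Field names and the
# 90-day default are hypothetical assumptions for illustration.

def eligible_for_archive(points, today, min_age_days=90):
    """Return the GFS points old enough to move to the archive tier."""
    cutoff = today - timedelta(days=min_age_days)
    return [p for p in points if p["gfs"] and p["created"] <= cutoff]

points = [
    {"name": "weekly-01", "gfs": True,  "created": date(2022, 1, 1)},
    {"name": "incr-0412", "gfs": False, "created": date(2022, 1, 1)},  # never archived
    {"name": "weekly-14", "gfs": True,  "created": date(2022, 6, 1)},
]

moved = eligible_for_archive(points, today=date(2022, 6, 15))
print([p["name"] for p in moved])  # ['weekly-01'] — recent GFS and incrementals stay
```

The key point the model captures: regular incrementals never reach the Archive Tier, only sufficiently old GFS points do.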


Backup copy to another bucket in another region? :D

I was implying direct to object storage. Sure, adding a copy job helps, but then that is not much different from a backup copy to object storage 🙂. I can already do that.

 

I guess if you back up directly to object storage, then copy to another bucket, it could potentially eliminate the need for on-prem storage. That is a hugely added bonus for some. I like my fast restores though, and as a storage guy I enjoy having my own disk.

 

I also find that with some vendors, getting OUT of their storage would cost an insane amount. If it’s archive data, I tend to put some of it to tape and have that data in object storage as well. I use the object storage for restores, as I can still extract application-aware files without pulling in a whole VM. If I were to switch clouds, I own the tape, and it would be free to stage it somewhere and upload it somewhere else.

Well, correct. Backup direct to object storage in the cloud does raise some more architectural questions, like fast restores and where to put the second copy. For the second copy, I guess a backup copy to object storage in a different region and a different storage account will work, but I don’t think it is as good as backup to a hardened repo and then a BCJ to object storage in the cloud.

 

You could also make that first copy to an on-premises object store and then do a BCJ out to the cloud. If an end user has the right S3-compatible object-lock storage, then you can have an immutable on-prem copy without having to learn Linux. 😉


Needs clarification

To create a Scale-Out Backup Repository (SOBR) with an Archive Tier, am I required to have an EC2 instance proxy? I think so, but do I create it, or does the Veeam application create it when I configure my backups for retention?

https://helpcenter.veeam.com/docs/backup/vsphere/glacier_proxy_appliance.html?ver=110

 

 


Yes, for the Archive Tier we need a server with EBS disks. That is a “helper server” used to change the block size of what’s stored in S3 to match the recommended block size in Glacier.
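One way to picture that repacking step: many small capacity-tier objects are coalesced into fewer, larger objects before landing in Glacier, where large objects are far cheaper to store and retrieve. A toy sketch, where the 1 MB source and 512 MB target sizes are illustrative assumptions, not Veeam's actual values:

```python
# Toy model of the "helper server" repacking step: many small objects from
# the Capacity Tier are coalesced into fewer, larger objects for Glacier.
# The 1 MB source and 512 MB target sizes are illustrative assumptions.

def repack(object_sizes_mb, target_mb=512):
    """Greedily pack small objects into chunks of up to target_mb."""
    chunks, current = [], 0
    for size in object_sizes_mb:
        if current + size > target_mb and current > 0:
            chunks.append(current)  # flush the full chunk
            current = 0
        current += size
    if current:
        chunks.append(current)      # flush the final partial chunk
    return chunks

small_objects = [1] * 1200               # 1200 x 1 MB objects in S3
glacier_objects = repack(small_objects)
print(len(glacier_objects))              # 3 — far fewer objects to store in Glacier
```

Fewer, bigger objects means fewer per-object charges and retrieval requests, which is presumably why the helper appliance exists at all.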



Ok. So Veeam, the application itself, will create the EC2 instance, but is there an additional cost for moving data from the Capacity Tier to the Archive Tier? I was told that if I were to set up an AWS Virtual Tape Gateway or StarWind VTL and configure the necessary settings in those applications to move data from S3 to the Archive Tier, it's free. Is that correct?



There will be a cost, but minimal (I think). See the link below.

https://forums.veeam.com/object-storage-f52/glacier-specific-archive-tier-questions-t72453.html



I’m not even sure Exagrid supports S3 natively yet, the way VAST Data or Wasabi do.

