
Hello guys 😄

Hope everyone is well! On my side, I'm on vacation 🤙. I wanted to share with you some tests done with the Veeam v12 beta (build 12.0.0.817) and its incredible new feature: backup directly to object storage.

Object Storage
For my test I used Wasabi as the cloud object storage provider. You can already find some really great posts about Wasabi.

Wasabi is now available in the list of providers when you add an object storage repository.

We have less information to enter, which is appreciable.

 

You will find your repository listed with the Wasabi Cloud Storage type.
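If you prefer to prepare the bucket outside the Veeam wizard, here is a minimal boto3 sketch against Wasabi's S3-compatible API; the region, endpoint, bucket name and credentials below are assumptions to adapt to your own account:

```python
import boto3

# Assumed Wasabi region/endpoint and a hypothetical bucket name.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-1.wasabisys.com",
    region_name="eu-central-1",
    aws_access_key_id="WASABI_ACCESS_KEY",      # placeholder
    aws_secret_access_key="WASABI_SECRET_KEY",  # placeholder
)

# Create the bucket that the Veeam object storage repository will point to.
s3.create_bucket(
    Bucket="veeam-v12-direct-backup",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)

# Quick sanity check that the credentials and endpoint work.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```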

 

Backup Job

The advantages of this kind of backup are multiple:

  • Financial: you can use on-prem object storage, and object storage can be cheaper than SAN
  • Built-in durability & reliability
  • Less hardware to manage

Let's take a look at the setup, because there are technical side effects.

As usual you select your repository; the GFS part is still available and you can configure a backup copy job if necessary.

In the advanced options, it's simplified! Goodbye backup modes: you just have the possibility to create an “active full”.

It's due to how object storage works. I haven't tested the active full option, but I presume the job will transfer all blocks and not only the new ones.
I don't know why the synthetic mode is not available, since GFS points are still possible.
 

The “defragment and compact full backup file periodically” option disappears too.

After my job ran, I expected to find my backups under “Object Storage”, but they are in the “Disk” section. Maybe this will change in the release version.

Backup Copy

Here we don't have a lot of differences; the two copy modes are available (mirroring/pruning).

I tested a backup copy of an object storage backup job to another bucket and it works perfectly.

As for the backup job, the files are visible under “Disk (copy)”

Use-cases

There are multiple; here are some ideas:

  • Backup to on-prem object storage
  • Backup copy jobs to object storage: I'll probably review, for some of my customers, the externalization to object storage with backup copy jobs. With a SOBR it's not possible to change the retention configured in the backup job, so if you have a long retention on-premises you will have the same in S3. A backup copy job direct to S3 will simplify some architectures, and as a bonus the GFS points will normally be protected for their whole retention period.
  • Agents running in the cloud or in home offices
  • Backup direct to the cloud for archival
  • Backup of the configuration to the cloud. Protecting the backup configuration is really important; I always recommend regularly exporting this backup to an offline media.
    With the possibility to back it up directly to the cloud, we benefit from its durability and reliability (the "11 nines"). A backup copy task for this job is, for me, still missing.

Don’t hesitate to share some other ideas!

Support

It will be possible to use direct to object storage for:

  • Virtual machine backups (VMware, Hyper-V, AHV...)
  • NAS backup
  • Backup copy
  • SQL, Oracle
  • Configuration backup
  • Veeam agent

Bugs?

Currently, when I want to delete a backup stored in the object storage, the task runs indefinitely. The files are deleted, but I have to close/reopen the console to see the task marked as successful.

Conclusion

A real game changer: this will give architects many more design options. It will simplify some requests, in particular by avoiding the use of a SOBR to externalize backups.

Are you waiting for this functionality, and if so, for what needs?

Cheers!

This is one of the best changes, especially for us as an MSP: we can move directly to object storage, which allows better scaling, and billing will become easier as well. Billing with block cloning on ReFS and XFS is a challenge because of the way VBR and VCC each read data.

Already testing this stuff in homelab with Wasabi as well. 😎


I did a demo for Object First on Friday and noticed the Wasabi option when they were creating a new object storage repo in their v12 beta, and wondered what that was all about. Thanks for posting this info... can't wait to get my hands on this.

 

Derek

 


It's a great feature. In my particular case, we didn't have much space to use as a “jump repository” to move data into an S3 repository for external backup.

Now we will have the possibility, and as you mentioned, the architecture of these setups will become easier and more flexible.

Love these improvements!


I missed this earlier in the week, but great post 👏 Particularly spelling out how creating a direct-to-object backup changes the available options for backup job configuration.


For NAS backups direct to object storage: will it be possible to set the NAS backups as immutable (if Object Lock is supported by the provider)?


Hello,

Yes, in v12 any repository with immutability can be used for NAS backup.
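For reference, a minimal sketch of preparing such a bucket with Object Lock enabled (a prerequisite for immutability), again using boto3 with an assumed Wasabi endpoint and a hypothetical bucket name; the immutability period itself is then set on the Veeam repository:

```python
import boto3

# Assumed endpoint/region and hypothetical bucket name; Object Lock can only
# be enabled at bucket creation time.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-1.wasabisys.com",
    region_name="eu-central-1",
)

s3.create_bucket(
    Bucket="veeam-nas-immutable",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
    ObjectLockEnabledForBucket=True,  # prerequisite for Veeam's immutability option
)

# Verify that Object Lock is active on the bucket.
print(s3.get_object_lock_configuration(Bucket="veeam-nas-immutable"))
```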


Nice!

Then I think I found a solution to create a secured backup (achieving the second 1 in 3-2-1-1-0) of a 200 TB NetApp filer ;D


I’m currently using an Exagrid appliance as my backup repository. Can the “Direct Backup to Object Storage” be configured to auto-start as soon as the corresponding backup job completes?

I’m going to use AWS Glacier Deep Archive.


Are you copying from the Exagrid or doing a job direct to object? If so, you can set the direct-to-object job to run after the other backup job. If using a SOBR with the Exagrid, you can add object storage to the Capacity Tier and turn on Copy Mode.


Backup Job direct to Object (Exagrid). I’m not using SOBR.

I'm hoping that when the Backup Job finishes, it will then auto-start the “Direct Backup to Object Storage” to AWS Glacier Deep Archive. Is that possible?


Direct to Object means you can send a backup job or copy job directly to object storage. So you need to set it up that way; it's not automatic.


For clarification, I can create a “Backup Copy Job” that will perform a “Direct Backup to Object Storage” to AWS. Is that correct?


Yes that is correct. 


Thanks!


@Stabz This is a great feature, but sending on-premises backups to object storage has me a little concerned. One issue I'm afraid will crop up is bandwidth: users will need to make sure they have enough bandwidth and object ingest speed to handle their backup traffic within their stated backup window.

The second issue I worry about is that people will think that since they have a backup in the cloud they won't need a second copy somewhere else. I think that would be a big mistake.

Personally I am going to present this as a great on-premises solution to provide immutability without the hassle of learning Linux, and it may well be easier to maintain for some shops. Then do a backup copy job to an off-site location.
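As a rough, back-of-the-envelope illustration of that bandwidth check (all numbers below are hypothetical placeholders):

```python
# Estimate whether a nightly direct-to-object job fits the backup window.
nightly_change_gb = 500   # data to push per night (hypothetical)
uplink_mbps = 1000        # usable upstream bandwidth in Mbit/s (hypothetical)
efficiency = 0.7          # assumed real-world throughput factor

seconds = (nightly_change_gb * 8 * 1000) / (uplink_mbps * efficiency)
print(f"Estimated transfer time: {seconds / 3600:.1f} h")  # ~1.6 h in this example
```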


Another clarification: can I set the “Direct Backup to Object Storage” to offload the data to AWS after my Backup Job finishes?

Edit: I found my answer in the link below, but can the data be saved onto disk as well and then offloaded to AWS, so my data is in two places?

Veeam v12 Sneak Peek: direct backup to Object Storage • Nolabnoparty

 


It can if using a SOBR as I mentioned.  That is the way to accomplish this.


I’m not using SOBR. Then my only option is using the “Backup Copy Job” option. Correct?


Yes then you have it in two places. SOBR is one option that is all. BCJ will also work.


Thanks!


Such a handy feature, can’t wait for V12


Hello @vmJoe! I agree, I certainly don't recommend backing up directly to cloud object storage; we need a backup near the production to have the best performance in case of restore.

But if you have an object storage appliance on-premises, this will simplify a lot :D


It's an added option, but backing up to the cloud as the only location would not follow the 3-2-1 best practice.

 

 


Backup copy to another bucket in another region? :D


I was implying direct to object storage. Sure, adding a copy job helps, but then that is not too much different from a backup copy to object storage 🙂. I can already do that.

 

I guess if you back up direct to object storage, then copy to another bucket, it could potentially eliminate the need for on-prem storage. That is a huge added bonus for some. I like my fast restores though, and as a storage guy I enjoy having my own disk.

 

I also find that some vendors, if you want to get OUT of their storage, would charge an insane amount. If it's archive data, I tend to put some of it on tape and have that data in object storage as well. I use the object storage for restores, as I can still extract app-aware files without pulling in a whole VM. If I were to switch clouds, I own the tape and it would be free to stage it somewhere and upload it somewhere else.

