
Morning,

 

Just to copy this sentence directly from the latest “Word from Gostev”:

“So I do expect the on-prem object storage adoption to go through the roof after V12 comes out with its support for backing up directly to object storage.”

 

There’s been talk of it before, but it’s great to see this as near to a commitment from Veeam as possible, prior to seeing it in the final product. This will do wonders for efficiency and scalability; I’ll be interested to see how this changes the dynamic of ReFS/XFS volumes versus just deploying object storage.

 

V12 is going to change the game!

I see the opportunity for Agent Backups directly to the cloud 🙂 Or doing Backup Copy Jobs to object storage instead of using the offload procedure.

But I don’t think that primary on-premises storage (XFS or ReFS) will go away in the next few years in my company :)

 



 

I definitely don’t see it going away. In my opinion we should always aim for the 2 in 3-2-1 and not rely solely on object storage; plus, ReFS will always be a great pairing with a single “Veeam in a box” style deployment. But direct to object will immediately be my preference over anyone daring to put a NAS with SMB into an environment!


It will change some perspectives on infrastructure design. Waiting for v12 :D


This is a feature we have been waiting for, as it allows easier scaling than block storage. We will still use XFS for hardened repos, but this will be a game changer.


Backup to object storage will be a great addition and more flexible than the current implementation with capacity tier. I'm sure many will be tempted to go for object storage only (similar to dedupe only), but as @MicoolPaul says, we should not forget about the 3-2-1 rule.


Backup direct to object storage will be a great addition to the possibilities.

As always, no single option will be the only solution. It will always be a mix of the possible options.


I like the idea of direct backup to object storage, but as a second option. Like @Mildur, I see it especially being used as a repository for a backup copy job. I don’t like the idea of using cloud storage as primary storage: upload links are too slow and too many things can go wrong; be aware that in that case the snapshots of your VMs stay open too long. I love implementing backup jobs to regular (ReFS and XFS) primary storage and afterwards a copy job to something else, so the source is not impacted anymore. Using direct object storage in a backup copy job gives you much more flexibility than using a capacity tier today. If you only want to implement this for your most critical VMs, today you would have to implement several SOBRs, one with a capacity tier and one without; a copy job to direct object storage is much more flexible.

If using on-premises object storage appliances, then it’s something else: we will have to wait and see what the pros and cons are compared to regular storage.

Again, as @Mildur says: when using Veeam agents (especially in the public cloud), it would be a perfect solution 😉 Think of a Veeam agent in Azure backing up direct to object storage on Wasabi with object-lock functionality 😉
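
For anyone who hasn’t played with object lock yet, here is a minimal sketch of the bucket-side prerequisite using boto3 against an S3-compatible endpoint. The endpoint, bucket name and retention values are placeholders, and as far as I understand Veeam sets retention on the objects it writes once immutability is enabled on the repository, so the default rule below is optional and just for illustration:

```python
import boto3

# Placeholders: swap in your own endpoint and credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",   # any S3-compatible endpoint with object lock
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object lock can only be enabled at bucket creation time
# (this also switches on versioning for the bucket).
s3.create_bucket(Bucket="veeam-immutable-backups", ObjectLockEnabledForBucket=True)

# Optional default rule: lock new objects in compliance mode for 14 days.
# In practice Veeam applies its own per-object retention, so you may skip this.
s3.put_object_lock_configuration(
    Bucket="veeam-immutable-backups",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 14}},
    },
)
```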


I think that is a good option for SOHO companies.

Not everyone has a huge on-premises repository to archive their backups.

 

With v12 it’s the client that will choose what the primary/secondary repository is.

With v11 the client doesn’t have this option.

 

 


Fine, finally we can get rid of the SOBR capacity tier. This is still the feature that is responsible for most problems here.


Agree with @Chris.Childerhose: for on-prem I still - at least at the moment - recommend a hardened repo. I currently have no idea what the price difference would be between a standard server for a hardened repo and a comparable (in performance and capacity) object storage appliance.

When customers already have experience with object storage, this new feature can be a great one for them. I also agree that it will help smaller companies back up directly to the cloud.


I just hope that this new feature will be useful for Standard licences too.


@DOSYS that’s a good question that I don’t know the answer to, but it could go one of a few ways based on the current licensing: https://www.veeam.com/veeam_backup_one_feature_comparison_ds.pdf

Object storage is currently only utilised within a SOBR, which requires at least enterprise licensing, so that could be a tie-in, especially as the line item is called “Scale-out Backup Repository™️ (SOBR) with object storage support”. It’s certainly not impossible to assume it will require at least enterprise licensing, as de-dupe appliances require enterprise, whilst storage snapshot integration requires enterprise plus.

 

We shall see!


How will this affect synthetic operations? Spaceless fulls etc.? I don’t know enough about object storage to comment, but I don’t think it uses pointers in the same way. I could be wrong.
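
To show what I mean by pointers: on ReFS/XFS a “spaceless” synthetic full is basically a new set of references to blocks that are already on disk, rather than a rewrite of the data. Here’s a toy Python illustration of that idea (not Veeam code); whether the object storage format does something equivalent is exactly what I’m unsure about:

```python
# Toy model of a "spaceless" synthetic full: the full is assembled as
# references to blocks that already exist in the chain, not as new copies.
incremental_chain = {
    "full_sunday":  {"block1": "data-v1", "block2": "data-v1", "block3": "data-v1"},
    "inc_monday":   {"block2": "data-v2"},
    "inc_tuesday":  {"block3": "data-v3"},
}

def synthesize_full(chain: dict) -> dict:
    """Build a synthetic full as pointers to the newest version of each block."""
    synthetic = {}
    for backup in chain.values():      # iterate oldest -> newest (insertion order)
        for block, ref in backup.items():
            synthetic[block] = ref     # later increments override earlier references
    return synthetic

print(synthesize_full(incremental_chain))
# -> {'block1': 'data-v1', 'block2': 'data-v2', 'block3': 'data-v3'}
```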


I believe we’ll find out more at VeeamON 😁


Fine, finally we can get rid of the SOBR capacity tier. This is still the feature that is responsible for most problems here.

Totally agree, I hope it will lift some of the limitations of the capacity tier.


@BertrandFR one of the key limitations that appears to be going away is being limited to a single object storage bucket for the capacity tier: Multiple S3 buckets in Capacity Tier in SOBR - Veeam R&D Forums


How will this affect synthetic operations? Spaceless fulls etc.? I don’t know enough about object storage to comment, but I don’t think it uses pointers in the same way. I could be wrong.

When I did my VMCA, we discussed this; it was the same operation as with XFS or ReFS for the capacity tier. I think it will be the same? We need more clarification about it in the Veeam docs. @Mildur, maybe the Detective has the answer!


@BertrandFR one of the key limitations that appears to be going away is being limited to a single object storage bucket for the capacity tier: Multiple S3 buckets in Capacity Tier in SOBR - Veeam R&D Forums

I don’t like the current limitation on retention; I wanted a longer retention on object storage and 2 copies during the retention period on the performance tier.



What do you mean by two copies? A copy on performance and a copy on object?



SOBR is limited by the retention on your jobs.

If I’m not wrong, in a SOBR you can’t choose different retentions for the capacity tier (move or copy).

I wanted to have this:

  • Retention for X days on the performance tier and the capacity tier (object) (2 copies)
  • Then a longer retention on the capacity tier for Y days


Hi Bertrand,

 

That should be possible by leveraging both the copy + move options.

You copy your backups immediately to object storage, then set your move retention to keep backups on the performance tier for only X days, with your job retention specifying the overall retention for object storage.

The main downside is that you can’t do per-job retention policies for this, as it’s set at the SOBR level:

https://helpcenter.veeam.com/docs/backup/vsphere/new_capacity_tier.html?zoom_highlight=copy+move&ver=110
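
If it helps to see the interplay written out, here’s a toy Python model of that copy + move behaviour. It’s just an illustration, not Veeam code; 14 and 90 days stand in for your X and Y, and the real engine has more nuance (for example, only sealed backup chains get moved):

```python
# Toy model of the SOBR copy + move behaviour described above. Assumptions:
# - copy mode mirrors a restore point to the capacity tier when it is created
# - move mode evicts it from the performance tier once it is older than the
#   operational restore window (X days)
# - the job's retention (Y days) governs how long it lives overall
def tiers_holding(age_days: int, move_after_days: int, job_retention_days: int) -> set:
    """Return which tiers still hold a restore point of the given age."""
    tiers = set()
    if age_days >= job_retention_days:
        return tiers                      # past job retention: gone everywhere
    if age_days < move_after_days:
        tiers.add("performance tier")     # still inside the operational window
    tiers.add("capacity tier")            # copied at creation, kept until retention expires
    return tiers

# Bertrand's example: 2 copies for X = 14 days, then capacity-only until Y = 90 days.
for age in (0, 7, 13, 14, 30, 89, 90):
    print(f"day {age:>2}: {sorted(tiers_holding(age, 14, 90)) or 'expired'}")
```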


I believe we’ll find out more at VeeamON 😁

I believe we will 😎


Has anyone tried V12 direct backup to S3-compatible object storage yet? How is the backup/restore performance? Is it a lot slower than a normal disk repository?

What about the deduplication factor?



Hi @Stella Lee! Recently, I participated in a proof of concept (POC) with an Object First appliance as an S3 backup repository. The performance was as promised by OF; that is, we achieved a 1 GB/s transfer rate. Regarding compression/dedupe performed by the Veeam proxies, the numbers were similar to what we see in the field.


Has anyone tried V12 direct backup to S3-compatible object storage yet? How is the backup/restore performance? Is it a lot slower than a normal disk repository?

What about the deduplication factor?

I use Wasabi in my homelab and go direct to it as well as using it in a SOBR. Performance is good and depends entirely on your environment and connection speed to the internet. It is slightly slower than a normal disk repository, but that is expected since it is in the cloud and not local.

Dedupe seems on par with other backup types and S3 vendors.

