Veeam Backup for Microsoft 365: Immutability & Backup Copies



Hey everyone,

Today I want to talk about not one, but two new features in Veeam Backup for Microsoft 365 (VB365) v7: Immutability and Backup Copies.

Not only do I want to talk about these two features, but also, most importantly, how they relate to each other.

 

New VB365 Feature: Backup Copies to ANY object storage

 

VB365 v6 brought a strongly requested feature to the VB365 product: backup copies. As part of Veeam’s 3-2-1-1-0 backup best practice, we should have a minimum of three copies of our data. Historically, we would need to achieve these three copies by either taking multiple full backups, or by backing up the VB365 server after it had successfully taken a backup. As I’ve not discussed these previously, let’s highlight some key drawbacks of each approach:

 

Backup Copy Methods Pre-V6

 

Multiple Full Backups

The main drawback to taking multiple full backups is the Microsoft-enforced throttling that applies when VB365 fetches content from the Microsoft 365 tenant. Microsoft applies both per-application and per-tenant throttling, and creating multiple backups can generate throttling events, especially during the initial backups.

Microsoft attempts to limit the wider impact of throttling by utilising the per-application throttles, preventing one application from impacting all users. However, this scenario changes when we start to utilise multiple applications, such as one per backup, whereby it’s quite possible we could hit tenant-level throttling and impact the users working within the Microsoft 365 tenant, creating a bad user experience.
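To make the throttling behaviour concrete, here is a minimal sketch of how any client pulling data from Microsoft Graph is expected to react to an HTTP 429 response by honouring the Retry-After header. This is purely illustrative of the protocol; VB365 handles throttling internally, and the endpoint in the comment is just an example.

```python
import time
import requests

def get_with_backoff(url: str, headers: dict, max_retries: int = 5):
    """Fetch a Microsoft Graph resource, honouring 429 throttling responses."""
    for attempt in range(max_retries):
        resp = requests.get(url, headers=headers)
        if resp.status_code != 429:
            return resp
        # Graph returns a Retry-After header (in seconds) when throttling;
        # fall back to exponential backoff if it is ever missing.
        delay = int(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError(f"Still throttled after {max_retries} retries")

# Example usage (hypothetical bearer token): list a mailbox's messages.
# resp = get_with_backoff(
#     "https://graph.microsoft.com/v1.0/me/messages",
#     headers={"Authorization": "Bearer <token>"},
# )
```

The more concurrent backup applications you run, the more of these 429 responses you can expect, and once the per-tenant limit is hit, every Graph consumer in the tenant feels it.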

 

Backing up the VB365 Server

Whilst it’s certainly possible to protect the VB365 server, this scenario creates certain restrictions. Namely, backing up the VB365 server and/or repository server(s) is incompatible with object storage. This is because, when object storage is used, only a cache is kept locally on the VB365 repository server, and this cache contains just the metadata necessary to reduce API calls to the object storage.

Forcing yourself to use a local backup repository instead of object storage brings a second downside that isn’t talked about enough: the increase in backup sizes. Local repository backups typically provide lower data compression rates of up to 10%, whilst object storage provides higher data compression rates of up to 55% (Source: Veeam Blog – Storage Decision Guide). That means lower storage efficiency, and a need to increase storage provisioning sooner than with object storage.
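To put those compression figures in perspective, here’s a quick back-of-envelope calculation. The 10 TB source size is an arbitrary example, and real compression rates vary by workload; this simply compares the two quoted ceilings.

```python
# Rough capacity estimate for 10 TB of source data, using the compression
# ceilings quoted above (flat rates assumed purely for illustration).
source_tb = 10.0

local_repo_tb = source_tb * (1 - 0.10)   # up to ~10% compression locally
object_repo_tb = source_tb * (1 - 0.55)  # up to ~55% compression on object storage

print(f"Local repository: {local_repo_tb:.1f} TB")  # 9.0 TB
print(f"Object storage:   {object_repo_tb:.1f} TB")  # 4.5 TB
```

At the quoted ceilings, the local repository needs roughly twice the capacity for the same protected data.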

 

Object Storage Replication

Whilst I didn’t mention this above, I want to address another ‘method’ of providing backup copies, and my opinion on it. It is supported to use native object storage replication, where available, with your VB365 object storage repositories. However, I personally disqualify this as a backup copy for one key reason: it is asynchronous replication, not a separate copy. As writes to the replication target are asynchronous, and commonly carry no service level agreement, we can’t guarantee the point in time from which this ‘copy’ is usable, creating questionable Recovery Point Objectives. Worse, a deletion on the source will also replicate as a deletion in the destination. This final point can potentially be countered with blob versioning, but that invites its own limitations, such as Veeam only supporting versioning on Azure when immutability is enabled. In short, it’s a path of heightened risk that the backup is not usable for recovery.
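To illustrate why replicated deletions are the crux of the problem, below is a sketch of an S3 replication rule using boto3. The bucket names and IAM role ARN are hypothetical; the point is the DeleteMarkerReplication setting, which, when enabled, mirrors source deletions into the destination.

```python
import boto3

s3 = boto3.client("s3")

# Both buckets must already exist with versioning enabled; names and the
# IAM role below are placeholders for illustration only.
s3.put_bucket_replication(
    Bucket="vb365-primary",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",
        "Rules": [{
            "ID": "replicate-everything",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {},
            "Destination": {"Bucket": "arn:aws:s3:::vb365-replica"},
            # With delete marker replication enabled, a delete on the
            # source propagates to the destination, which is exactly why
            # replication is not a true, independent backup copy.
            "DeleteMarkerReplication": {"Status": "Enabled"},
        }],
    },
)
```

Even with delete marker replication disabled, you’re still reliant on versioning and lifecycle rules to keep recoverable data, which is precisely the extra risk described above.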

 

Backup Copy Methods in V6

This brings us to Veeam’s previous release of VB365, v6, in which we saw the introduction of Backup Copies. This feature is for object storage, finally providing us with a way to truly achieve the ‘3’ in our 3-2-1-1-0 requirements in this scenario. Some restrictions applied though: we were limited to using Azure or AWS, and only their archive-tier storage offerings, namely Azure Archive, AWS S3 Glacier, and AWS S3 Glacier Deep Archive.

This was a nice step in the right direction: we were, of course, going to primarily use our backup to restore from, with the backup copy as our safety net, and archive tiers provided cheaper storage capacity. Unfortunately, there were some restrictions this feature didn’t yet address. Object storage is a concept, not something tied to the hyperscaler giants AWS and Microsoft Azure. It’s possible to have a primary backup on object storage locally and then wish to create a backup copy to a cloud provider, or simply to prefer readily accessible, higher-performance tiers of object storage over the archive tiers available from Microsoft Azure and AWS.

 

Now: Backup Copies in V7

This is where things get exciting: in v7, backup copies can go between any supported object storage platforms or solutions. So whether you’re planning on using an on-premises object storage solution such as Cloudian or Object First, a software-defined option such as MinIO, or any S3-compatible provider, it can be used as either your primary backup or a backup copy target.
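The ‘any S3-compatible provider’ point is worth dwelling on: from a client’s perspective, an S3-compatible platform is just a different endpoint URL. The sketch below shows the idea using boto3 against a hypothetical on-premises MinIO endpoint; the URL and credentials are placeholders, and this illustrates the compatibility concept rather than how VB365 connects internally.

```python
import boto3

# Any S3-compatible endpoint is addressed the same way as AWS S3 itself;
# only the endpoint URL and credentials change. All values are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.internal:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List the buckets the credentials can see, exactly as you would on AWS.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```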

 

Where does Immutability fit into this?

 

VB365 is bringing immutability support exclusively to backup copies, at least for the v7 release. This is an important design decision when factoring in where your primary and secondary backup copies should reside. At the time of writing, the Technology Partners section of the Veeam website hasn’t been updated to reflect the products explicitly supported by VB365 with immutability. For a good indication until the list is updated, have a look at the products that support Veeam Backup & Replication with ‘Object – with Immutability’, and always confirm with the vendor before making any financial commitments!

Now, why do we want immutability? If you’ve missed out on this latest industry buzzword, it’s a simple concept. Immutability means we can read from, and write to, our backup repository, but we can’t modify or delete the existing data until its ‘object lock’ expires, a setting we define during configuration within VB365. The only way to tamper with immutable data is to have administrative ownership of the system or service providing it. On a traditional system, this would be root access; on a service such as Microsoft Azure, this would be Microsoft themselves performing the data purge, which I’ve only seen happen after a customer fails to pay their bill for long enough that Microsoft terminates and deletes the customer’s Azure resources.
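If you’ve not seen object lock in action, here is a minimal sketch of the underlying S3 mechanism using boto3. The bucket and object names are hypothetical, and VB365 manages retention itself; this just demonstrates what ‘read and write, but no modify or delete’ looks like at the storage layer.

```python
from datetime import datetime, timedelta, timezone

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "vb365-copy-immutable"  # hypothetical bucket name

# Object Lock must be enabled at bucket creation time
# (region configuration omitted for brevity).
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Write an object, then lock that version in compliance mode for 30 days.
resp = s3.put_object(Bucket=bucket, Key="backup.blk", Body=b"...")
version_id = resp["VersionId"]
s3.put_object_retention(
    Bucket=bucket,
    Key="backup.blk",
    VersionId=version_id,
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=30),
    },
)

# Attempting to delete the locked version is refused until the lock expires.
try:
    s3.delete_object(Bucket=bucket, Key="backup.blk", VersionId=version_id)
except ClientError as err:
    print("Delete blocked:", err.response["Error"]["Code"])  # AccessDenied
```

In compliance mode, the retention period cannot be shortened or removed by anyone, including the bucket owner, which is what makes the copy trustworthy against both ransomware and malicious administrators.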

 

Immutability & Backup Copy Design Considerations

 

To bring this topic to a close, I want to reiterate some of the key points that need to be considered when designing a VB365 backup solution that incorporates Immutability and Backup Copies.

 

API Calls, Bandwidth, and Billing

Most users of object storage will be familiar with the fact that object storage services are primarily billed on API calls, storage consumed, and bandwidth, or some combination of these three. As a result, you should expect to be billed for at least one of these three items by your destination when performing a backup copy to an object storage service.

However, it is certainly possible to incur additional costs if you’re using an object storage service as your primary backup target as well. If your primary provider charges for ‘read’ API calls, or for egress bandwidth, you’ll incur those charges as the backup copy job fetches the backup objects from that service.
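As a worked example, here’s a rough estimate of the read-side cost of copying 1 TB out of a primary provider. Every price and request count below is an assumption for illustration; substitute your provider’s actual rate card.

```python
# Back-of-envelope cost of a backup copy run that reads 1 TB from a
# primary object storage provider. All figures are assumed placeholders.
data_gb = 1024
egress_per_gb = 0.09       # $/GB egress (assumed)
get_requests = 1_000_000   # GETs for many small backup objects (assumed)
get_per_1000 = 0.0004      # $/1,000 GET requests (assumed)

cost = data_gb * egress_per_gb + (get_requests / 1000) * get_per_1000
print(f"Estimated read-side cost per copy run: ${cost:.2f}")  # ≈ $92.56
```

Notice that egress dominates here; a primary provider with free egress changes the economics of backup copies considerably.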

 

Efficient Traffic Flow

When performing a backup or a backup copy, the proxy server needs to fetch the data being protected from its source, requiring download (‘receive’) bandwidth, and to write that data to its destination, requiring upload (‘transmit’) bandwidth.

As object storage does not provide compute, your backup proxy still needs to fetch from the source and write to the target. With backup copies now available between any supported object storage solutions and services, you might find, for example, that where a datacentre is bandwidth-constrained, you would benefit from a hosted proxy where bandwidth is plentiful, interacting with Microsoft 365 and a hosted object storage solution, and then writing a backup copy back to the bandwidth-constrained datacentre as the final copy.
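A quick calculation shows why proxy placement matters. The backup size and link speeds below are illustrative assumptions.

```python
# Time to move a 5 TB backup copy across links of different speeds,
# ignoring protocol overhead. Figures are illustrative only.
size_gb = 5 * 1024

for label, mbps in [("100 Mbps constrained site link", 100),
                    ("10 Gbps hosted proxy link", 10_000)]:
    seconds = size_gb * 8 * 1000 / mbps  # GB -> gigabits -> megabits -> s
    print(f"{label}: {seconds / 3600:.1f} hours")
# 100 Mbps:  ~113.8 hours
# 10 Gbps:   ~1.1 hours
```

Placing the proxy where bandwidth is plentiful keeps the fast path (Microsoft 365 to primary storage) fast, and lets the slow, constrained link carry only the final copy.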

 

Prioritise Immutable Object Storage as your Backup Copy

As mentioned previously, immutability is only supported for backup copy jobs. So if you have one object storage solution that supports immutability and another that doesn’t, in most scenarios it makes sense to use the immutability-capable solution as your backup copy target, and the non-immutable service as your primary backup.


3 comments


Really great article Michael; I read this on your blog as well.


Great information here... I haven’t done anything with these types of workloads, so it’s great to see this information. And I agree: object replication, to me, is not a second copy. My reasoning is that if there is corrupted or bad data in the bucket, it’s going to be in the replica as well. But you bring up a good point about not really having any control over the replicated data, or any way of ensuring that the replicas are replicated properly and promptly. I’d rather Veeam control that.


Amazing blog! Thank you @MicoolPaul 
