
I’m using Veeam Backup for Office 365.

Our data needs to be retained forever.

My problem is that we have limited fast storage options (NAS that can read/write at high speeds). We have virtually unlimited slower storage options.

I am currently using the fast storage for the repository. It is filling up fast, and in less than a year it will be full.

I don’t need 10 years of emails and such to be quickly restored. I would like to move anything over 6 months old to a repository on a different, slower NAS.

I’m trying to understand offloading to object storage repositories, but I’m not sure whether this is the correct solution. Any direction would be greatly appreciated.

There are migration commands (https://helpcenter.veeam.com/docs/vbo365/powershell/move-vboentitydata.html?ver=50), but your .adb files (Jet databases, which contain the backup data) will not get smaller on their own. Veeam Support may eventually be able to help you shrink them, but I’m not entirely sure about that.
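
As a rough sketch only (the repository names, the user account, and the exact parameter set are assumptions on my side; check them against the help article above and your VBO version), moving one user’s data to a slower repository could look something like this:

# Load the Veeam Backup for Office 365 PowerShell module (module name may differ by version)
Import-Module Veeam.Archiver.PowerShell

# Source (fast NAS) and target (slow NAS) repositories - names are placeholders
$fastRepo = Get-VBORepository -Name "Fast-NAS-Repo"
$slowRepo = Get-VBORepository -Name "Slow-NAS-Repo"

# Pick the backed-up user whose data should be moved
$user = Get-VBOEntityData -Type User -Repository $fastRepo -Name "jdoe@contoso.com"

# Move that user's data from the fast repository to the slow one
Move-VBOEntityData -From $fastRepo -To $slowRepo -User $user

The databases on the source repository will keep their size afterwards, which is exactly the limitation mentioned above.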

Object storage would be a good solution to go with.

On-premises or in the cloud.

 

It’s the future of Veeam backup storage, in my opinion.

The compression will also be much better:

 

Legacy storage = ~10% compression

Object storage = ~50% compression

 

Migrate to Object Storage:

 

https://www.veeam.com/kb3067

 

 



I would agree with Mildur here: if you can do object storage, then go that route. Since the O365 backup databases are based on JET technology like Exchange, the only way to reduce space is to take the services offline and run the ESEUTIL commands against the database file.
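
For illustration only (this assumes the Exchange-style ESEUTIL offline defragmentation syntax; the file path is a placeholder, the Veeam services must be stopped first, and for Veeam’s .adb files I would involve Veeam Support before trying it), the kind of command meant here is:

# Offline defragmentation of the JET database to reclaim free space
# (placeholder path; stop the backup services before running this)
eseutil /d "D:\VeeamRepository\repository.adb"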

You might also want to check the retention settings of your repository. You could create a second repository with a different retention and another backup job that sends data there for a longer period, so your fast storage only keeps a shorter retention.
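
To see what each repository is currently set to, a quick check along these lines can help (the property names are what I would expect on the repository objects; verify them against your version):

# List all repositories with their retention configuration
Get-VBORepository |
    Select-Object Name, Path, RetentionType, RetentionPeriod |
    Format-Table -AutoSize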


Hi, the best way is to use object storage because of the ~50% compression (and if you use public object storage, capacity is practically unlimited). Just create a new repo, activate the offload-to-object-storage checkbox, and create an object storage repository. The first regular repo is then used as a cache to transport the data to the object storage. To move the existing data from the existing repository, just run the PowerShell command @Mildur described.
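
A rough PowerShell sketch of that setup (the names, path, and retention values are placeholders/assumptions, the object storage repository is assumed to already exist, e.g. created in the console, and the cmdlet parameters should be verified against your version):

# Existing object storage repository (placeholder name)
$objectRepo = Get-VBOObjectStorageRepository -Name "S3-Archive"

# Proxy that will host the new local repository
$proxy = Get-VBOProxy | Select-Object -First 1

# New local repository that acts as the cache and offloads backup data to object storage
# (retention values below are assumptions - adjust to your needs)
Add-VBORepository -Proxy $proxy `
    -Name "OffloadCache" `
    -Path "E:\VeeamCache" `
    -ObjectStorageRepository $objectRepo `
    -RetentionType SnapshotBased `
    -RetentionPeriod KeepForever

After that, point a backup job at the new repository and use the Move-VBOEntityData sketch above for the existing data.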


Thank you all for the advice. I will give it a shot.

