
I am setting up VBO to back up 365 data to the cloud as we are starting to migrate all users to 365. I have read it’s best to separate repos instead of just having one repo for the whole organization.

Anyone have an example of a best-practice setup? So far I am starting out creating separate repos for OneDrive, Teams, SharePoint, and Groups, and separate jobs for each one of those. Is this a good approach? I was also going to add one more repo for Users. Would that cover everything for now, as we are currently still using Exchange on-prem?

 

Should I also make one repo and job for the entire Org with all data in it? Or only have one repo with everything in it? Just want to make sure I set it up properly so I don’t realize I should have set it up differently a year down the road. Thanks!

Yes, this is a good approach for sure. We have the following repos with Linux proxy pools:

  • Onboarding Pool
  • Exchange Pool
  • OneDrive Pool
  • SharePoint/Teams Pool

Everyone starts out in the Onboarding pool, then we move the repository over to the respective pool after the initial full is completed. You can do this with PowerShell (a rough sketch is below). See my blog for details - https://just-virtualization.tech/2025/04/01/powershell-tips-for-veeam-vb365-v8-1/
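
For anyone who doesn't want to dig into the blog straight away, a minimal sketch of that workflow with the Veeam.Archiver.PowerShell module might look like this. The repo/job/pool names are placeholders, and the parameter for re-pointing a repository at a different proxy pool is an assumption to verify against the v8.1 cmdlet reference (or the blog above).

```powershell
# Minimal sketch, assuming the Veeam.Archiver.PowerShell module on VB365 v8.x.
# Names are placeholders; verify cmdlet/parameter names against the Veeam
# PowerShell reference before using (the -ProxyPool parameter in particular
# is an assumption).
Import-Module Veeam.Archiver.PowerShell

$job  = Get-VBOJob -Name "Onboarding - New Users"
$repo = Get-VBORepository -Name "Onboarding Repo"

# Only move once the initial full has completed successfully
$lastSession = Get-VBOJobSession -Job $job |
    Sort-Object -Property CreationTime |
    Select-Object -Last 1

if ($lastSession.Status -eq "Success") {
    $targetPool = Get-VBOProxyPool -Name "OneDrive Pool"
    Set-VBORepository -Repository $repo -ProxyPool $targetPool
}
```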


Thanks for the info! I’ll look into using an Onboarding pool and check out your link!


Not a problem.  Always here to help.  I am working on some very large VB365 projects right now, so this comes straight out of my own designs. 😁

 
 
 

Hey,

 

How many users are you protecting?

Where is your compute running for VB365 and its proxies? You mentioned your data will reside in the cloud; which cloud?

 

A lot of historic guidance around splitting repos and proxies comes from the v7 days, when a repository had one proxy. So in some environments, splitting jobs by workload type (mailboxes/sites etc.) and pointing each at a different proxy/repo was a great way to improve throughput. A lot of mid-sized organisations can run a backup job with just one proxy pool and repository and see no notable performance impact.

 

Just making sure your architecture doesn’t end up being overkill and overly expensive.

 

Stick to the essentials: disabling EWS throttling, using a second app registration for the initial backup, and ensuring your proxies have enough compute, RAM, and bandwidth to saturate your available API calls and storage.
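
As a rough illustration of the second app registration piece, you could create the extra registration with the Az module rather than clicking through Entra ID. The display name and certificate variable below are placeholders, you'd still need to grant the same API permissions and admin consent VB365 requires, and VB365 can also register the app for you during organization setup.

```powershell
# Sketch only: create a second app registration for the initial backup run.
# Requires the Az.Resources module; the display name and $certBase64 value
# are placeholders. Permissions/admin consent still need to be granted the
# same way as for the first registration.
Import-Module Az.Resources
Connect-AzAccount

$app = New-AzADApplication -DisplayName "VB365-Backup-App-2"

# Attach the public certificate the VB365 server will authenticate with
New-AzADAppCredential -ApplicationId $app.AppId -CertValue $certBase64

$app.AppId   # add this as an additional backup application in the VB365 console
```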


Hi MicoolPaul,

We have 130 users, have VBO on a local server, and are sending backups to Wasabi, with eventual plans to add a backup copy to a local NAS and hopefully switch to Veeam Data Cloud when budget allows. Currently my first run of backups is only about 25 GB, as only a few departments have started using OneDrive and SharePoint, plus our Exchange is still on-prem until next year.

 

I set up repos and jobs for OneDrive, SharePoint, Sites, Groups, Teams, and Users. I may remove some of those as I see there is crossover in the data. I set the proxy to be our VBO server, as I read there should be very little storage used up by doing this. So I am not using proxy pools at all.

 

I have read about disabling EWS throttling for when the time comes to migrate, but I will look into the second app registration. We will eventually also need to configure Teams chats to be included in our Teams backup.


Hi,

 

In this case, you’ve got a real risk of over-engineering and costing yourself more money than necessary.

 

My recommendation would be 1 pool. Even 1 proxy would easily be enough based on your current data sizes!

There’s no harm in having 1x repo per job type. But note the strong overlap between SharePoint and Teams, so I’d suggest putting those types into the same repo.

 

Unless you’re going to need different retentions for individual groups or users, a per-job-type split gives you some flexibility to, say, keep 5 years for Exchange but 7 for OneDrive if you needed to pivot down the road.

 

So right now you’ve got a local VB365 server pulling data down from the WAN (downloading), processing it locally, then writing to Wasabi (uploading). What’s your WAN bandwidth? Are you going to have sufficient bandwidth in a DR scenario to download all your backup data from Wasabi and upload it to M365 in a timely manner?
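
To put a rough number on that, here is a quick back-of-the-envelope calculation. The 1 TB and 500 Mbps figures are purely illustrative, not your actual dataset or line rate:

```powershell
# Back-of-the-envelope DR timing: how long to pull backup data down from
# object storage and push it back into M365 at a given line rate.
# 1 TB and 500 Mbps are illustrative figures only - substitute your own.
$backupSizeGB  = 1024      # total backup data to restore, in GB
$bandwidthMbps = 500       # usable WAN throughput, in megabits per second

$sizeMegabits = $backupSizeGB * 1024 * 8            # GB -> megabits
$hours        = $sizeMegabits / $bandwidthMbps / 3600

"{0:N1} hours to move {1} GB at {2} Mbps (before API throttling overhead)" -f $hours, $backupSizeGB, $bandwidthMbps
# ~4.7 hours in this example, and you pay it twice in a DR:
# once down from Wasabi, once up into M365.
```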

 

You’re absolutely right about disabling EWS throttling. A second app registration won’t be necessary yet, but it might be beneficial if you did a giant cut-over migration, such as changing folder redirection for all users to OneDrive simultaneously.

Your note about the NAS is interesting and I want to explore it fully for you. I don’t know the vendor & model, but here’s what you should know: NAS via SMB is deprecated and going away. This is because when we use the NAS as regular disk storage, as SMB does, we host a JetDB on it, and databases over the SMB protocol are a bad mix that generates a lot of support tickets, so that is going away from the VB365 product.

If you were to use the NAS as block storage, we don’t support it as a backup copy target, as backup copy is only permitted with object storage repositories as both the source and destination. If your NAS does support being used as object storage, please make sure it’s certified/spec’d up to the vendor’s recommended requirements for this workload type, as we can really hit object storage hard.

Finally, keep in mind the WAN data flow: you’re going to be downloading data for your initial backups to upload to Wasabi, and then also downloading data from Wasabi to store locally. That’s potentially a lot of bandwidth to reserve in your pipe once you’ve got all M365 services being used in anger.

In summary, I’d be suggesting a setup like this:

1x Proxy (likely on your public cloud of choice for bandwidth reasons, but that depends on your bandwidth usage & commitments)

3x Repositories: Exchange, OneDrive, and SharePoint&Teams

3x Backup Jobs: the first for all mailboxes and archive mailboxes, the second for all OneDrives, the third for all Sites and Teams.
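
A minimal sketch of that layout with the Veeam.Archiver.PowerShell module is below, assuming the three repositories already exist. The organization, repository, and job names are placeholders, and the New-VBOBackupItem parameter combinations are assumptions to check against the cmdlet reference for your VB365 version.

```powershell
# Sketch only: three jobs mapped to three repositories, whole-organization
# scope per service. Names are placeholders; verify cmdlet/parameter names
# against the Veeam.Archiver.PowerShell reference before using.
Import-Module Veeam.Archiver.PowerShell

$org      = Get-VBOOrganization -Name "yourtenant.onmicrosoft.com"
$repoExch = Get-VBORepository -Name "Exchange"
$repoOD   = Get-VBORepository -Name "OneDrive"
$repoSPT  = Get-VBORepository -Name "SharePoint-Teams"

# Organization-wide backup items per service (parameter sets are assumptions)
$exchItems = New-VBOBackupItem -Organization $org -Mailbox -ArchiveMailbox
$odItems   = New-VBOBackupItem -Organization $org -OneDrive
$sptItems  = New-VBOBackupItem -Organization $org -Sites -Teams

Add-VBOJob -Organization $org -Name "Exchange"         -Repository $repoExch -SelectedItems $exchItems
Add-VBOJob -Organization $org -Name "OneDrive"         -Repository $repoOD   -SelectedItems $odItems
Add-VBOJob -Organization $org -Name "SharePoint-Teams" -Repository $repoSPT  -SelectedItems $sptItems
```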


Awesome, thank you for all the info. We will have to look into another solution for backup copy jobs; we currently do this for our Veeam B&R server backups, but it looks like we will need to use a cloud destination for VBO. Our network is about 840 Mbps down and 660 Mbps up. I will also plan to set the backups to happen after hours when very few people will be doing any work.

What are your thoughts on enabling versioning and object lock on the Wasabi buckets? Is versioning not needed since I am taking snapshot-based backups? It seems like that’s a quick way to fill up your storage quota. Object lock seems like a smart idea, but it requires versioning to be enabled, so I’m not sure what the best option is. I’m still pretty new to all this 365 stuff, as well as VBO and Wasabi.


Object Lock (Immutability) is always a good idea and you will need to set that in the jobs when you create them.  Ensure you are using v8.
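
For reference, Object Lock has to be switched on when the Wasabi bucket is created (and it brings versioning along with it). A sketch of what that could look like using the AWS CLI against Wasabi's S3-compatible endpoint, with the bucket name, region, and endpoint URL as placeholders for your own Wasabi setup:

```powershell
# Sketch: create a Wasabi bucket with Object Lock enabled via the AWS CLI.
# Bucket name, region, and endpoint are placeholders for your Wasabi setup.
# Object Lock can only be enabled at bucket creation time and requires
# versioning, which it turns on for you.
aws s3api create-bucket `
    --bucket "vb365-backups-example" `
    --region "us-east-1" `
    --endpoint-url "https://s3.us-east-1.wasabisys.com" `
    --object-lock-enabled-for-bucket

# Then add the bucket in VB365 as an object storage backup target and
# enable the immutability option when you configure the repository/jobs.
```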


Hi @Flinktron, don’t worry about versioning messing with space consumption.

 

Snapshot-based retention purely means that if your retention is 5 years and the item is in your mailbox at the latest backup, we’ll keep it for 5 years from that point. If it’s still there tomorrow, it’s 5 years from that point. With VB365 we are “forever forward incremental”, so unless the item changes, only one copy of it is ever stored.

 

So I wouldn’t stress about a much higher amount of storage consumption. It will be higher than item-level retention in nearly all scenarios, but only because item-level retention cares about the age of the email itself and will only keep it for 5 years from the date it was received, which you might not want.


@Flinktron - I wanted to follow up and see if you were able to get this resolved so we can mark an answer, even if it is your own reply with the details. Let us know.


Thank you Chris. Yes, I ended up creating 4 repos and 4 jobs: SharePoint, OneDrive, Teams, and Groups. I will later add Exchange to the Groups job once we start migrating that. Still looking into possibly creating a backup copy job to a different cloud storage provider. Thanks for all your help everyone!

