
Veeam Backup for Microsoft 365 v8: Architecture, Setup, and the Explorers

  • March 26, 2026

eblack

 

Microsoft’s position on your Microsoft 365 data has not really changed: they keep the service running, but they do not give you backup in the way most admins mean it.

Yes, deleted data may hang around for a while. Exchange has retention behavior. SharePoint and OneDrive have versioning. Teams data can still exist in awkward places even after it disappears from the user view. None of that gives you a real backup timeline, a clean point-in-time restore, or a recovery workflow you would want to bet an outage on.

That is the gap Veeam Backup for Microsoft 365 fills, and v8 is not just another incremental release. The architecture changed enough that you need to think about deployment differently than you did in v7.

 

Start with the big change: v8 is built around PostgreSQL

 

If you are coming from v7, the first thing to understand is that SQLite, the backbone of the old design, is gone.

Older builds used SQLite for the configuration database and for per-proxy cache data. In v8, that moves into PostgreSQL. That is not a cosmetic backend change. It is what makes shared cache possible, and shared cache is what makes proxy pools possible. In other words, the proxy story changed because the database story changed first.

That matters in real deployments. In earlier versions, each proxy carried its own persistent repository cache. In v8, multiple proxies can work against the same object storage repository because the shared cache lives centrally in PostgreSQL.

If you are planning scale, that is the first architectural shift to respect.

 

Proxy pools are the second big shift

 

In v7, multiple proxies were just multiple proxies. In v8, they can be grouped into a pool and treated as one logical resource for backup, restore, and backup copy activity.

That gives you load distribution and a cleaner way to spread Microsoft API pressure across several systems. It also makes maintenance easier, because you can take one proxy out without wrecking the whole schedule.

There is a catch, and it is an important one: proxy pools only work with object storage repositories. If you are still thinking in local Jet-based repository terms, that part of the design does not carry forward cleanly.

 

Linux proxies and NATS are part of the new normal

 

Linux proxy support is one of the nicer additions in v8, especially if you care about cost and scale. Windows is still there, but now you can deploy proxies on supported RHEL or Ubuntu systems, and newer v8 builds extend that support further.

NATS is the other new moving part people need to notice before installation day. It handles communication between proxies in a pool. For a small deployment, you can keep it simple and run it locally with the rest of the VBO365 stack. Once the environment starts getting bigger, that design gets less attractive and a separate NATS host makes more sense.

One deployment, one NATS instance. Do not try to get clever and share it across separate VBO365 deployments. That is not supported.
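For a dedicated NATS host, the server takes a plain-text config file. The fragment below is an illustrative sketch only, not Veeam's shipped defaults; the listen address, server name, and log path are assumptions you would replace with your own values:

```conf
# nats-server.conf -- illustrative values for a standalone NATS host
# serving a single VBO365 deployment's proxy pool traffic.
listen: 0.0.0.0:4222          # default NATS client port; proxies connect here
server_name: vb365-nats       # hypothetical name, pick your own
log_file: "/var/log/nats/nats-server.log"
```

The point of splitting NATS out is isolation: proxy-pool coordination traffic stops competing with the VBO365 server for resources, and you can restart the control plane without disturbing the message bus.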

 

Immutability, MFA, and RBAC make v8 more defensible

 

This release also closes some security and governance gaps that were easier to hand-wave in smaller environments.

Primary object storage repositories can now be immutable on supported platforms. MFA is in the console and the Explorers. RBAC is no longer an afterthought. If you are running this for an MSP use case, or for a larger internal environment where not everyone should have the same access, those are meaningful upgrades rather than brochure features.

The same goes for Teams coverage. Private and shared channels are not some side case anymore. v8 shipped with real coverage there, which matters if your org actually uses Teams the way Microsoft keeps telling people to.

 

The architecture is not hard, but it has more parts than v7

 

If you drew this on a whiteboard, the layout is pretty straightforward.

The VBO365 server is the control plane. That is where the console and scheduling live. PostgreSQL now carries much more weight, because it handles configuration, org cache data, and repository persistent cache. NATS is in the picture if you use proxy pools. Proxies do the actual work. Repositories hold the data. Object storage is the design center now, especially if you want pools and immutability.

Where people get into trouble is not understanding that PostgreSQL has become shared infrastructure instead of just ‘the database over there.’

That is also why the VBR coexistence warning matters. If Backup and Replication is already using its own local PostgreSQL instance on the same machine, do not assume VBO365 can just pile onto that cleanly. VBR configures PostgreSQL for local access by default. VBO365 needs remote proxy access. Those are not the same assumptions, and if you ignore that, the install will remind you.
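The difference comes down to two PostgreSQL settings. A minimal sketch of what remote proxy access implies, with an illustrative subnet and auth method (check your own network and your build's documentation before copying anything):

```conf
# postgresql.conf -- a VBR-bundled instance typically listens locally only;
# VBO365 proxies need to reach the instance over the network.
listen_addresses = '*'

# pg_hba.conf -- allow the proxy subnet (10.10.20.0/24 is illustrative).
# TYPE  DATABASE  USER  ADDRESS         METHOD
host    all       all   10.10.20.0/24   scram-sha-256
```

If those two files still reflect local-only assumptions, remote proxies cannot register their cache traffic, and that is exactly the failure the installer warning is trying to prevent.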

 

System requirements are not the interesting part. The operational requirements are.

 

You can read the support matrix. What matters in practice is this:

PostgreSQL needs to be sized like a real dependency, not treated like a checkbox. It has to accept remote connections from proxies. It needs the right extensions. It needs UTF-8. It needs storage that is not miserable. If the environment is going to be serious, give PostgreSQL SSD-backed resources and stop pretending the default smallest-possible footprint is a production design.

The same goes for NATS. Small lab? Local is fine. Real scale? Separate it.

Modern authentication is also not optional anymore. If somebody is still mentally carrying forward old authentication habits from much older builds, fix that assumption before deployment starts.

 

Deployment order is easy enough if you prepare the database first

 

The installer itself is not the hard part.

The work starts before it.

Prepare PostgreSQL first. That might mean standing up a new instance or letting Veeam install one locally for a small deployment. If the environment is large enough, or proxy count is going high, plan that database like you mean it. In the bigger environments, PgBouncer also enters the conversation for the same reason it always does: connection handling at scale.
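If PgBouncer does enter the picture, the configuration is small. This is an illustrative sketch with made-up host names and sizing, not tuned guidance; pool mode and limits need to match your proxy count and how the application uses prepared statements:

```ini
; pgbouncer.ini -- illustrative only; pg01.example.internal is hypothetical
[databases]
* = host=pg01.example.internal port=5432

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432            ; proxies connect here instead of 5432
auth_type = scram-sha-256
pool_mode = session           ; safest default; transaction pooling breaks
                              ; clients that rely on session state
max_client_conn = 500
default_pool_size = 50
```

The design reason is the one the article gives: many proxies each holding many connections will exhaust a plain PostgreSQL instance's connection slots long before CPU or disk becomes the problem.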

After that, run the installer on the designated VBO365 server, point it at PostgreSQL, decide where NATS belongs, finish setup, and only then start adding the Microsoft 365 organization and infrastructure components.

That sequence is not glamorous, but it avoids a lot of rework.

 

Adding the Microsoft 365 organization is now mostly about app registration discipline

 

At this point the question is not whether to use modern auth. It is whether the Entra application was set up correctly for the workloads you actually want to protect.

Exchange, SharePoint, OneDrive, and Teams do not all ask for the same permissions. That is the part people rush through and then come back to later when enumeration or access does not behave the way they expected.

Once the tenant, application ID, and certificate or secret are in place, Veeam can validate the connection and enumerate the organization.

 

Proxies and repositories: this is where the shape of the deployment becomes real

 

The default local proxy is enough to get moving, but it is not where a larger environment should stop.

As the workload grows, add remote proxies. If you need pooled behavior, group them. Then build the repositories around the retention and scaling model you actually want, not around what was easiest to click through on day one.

One thing that trips people up fast: retention is configured on the repository, not on the backup job. If two very different user groups need different retention, that is not a job-setting problem. That is a repository-design problem. Separate repositories are the answer there.

 

Backup jobs are straightforward until you forget what they are really protecting

 

The job model is flexible enough that you can protect the whole organization or narrow it down to specific users, groups, teams, or sites.

That part is easy.

The important piece is remembering that different workloads mean different objects. For users, you may care about mail, archive mailbox, OneDrive, calendar, and contacts. For sites, it is SharePoint content. For Teams, it includes standard, private, and shared channels.

Then pick the proxy or pool, pick the repository, and schedule the job according to the RPO you can justify. The platform supports very aggressive intervals, but that does not automatically make them sensible everywhere.

 

The Explorers are where the product has to prove itself

 

This is the part I care about most, because this is where people stop talking about architecture and start trying to get data back.

Exchange Explorer is the one most admins will recognize fastest. Mail, folders, contacts, calendar items, archive mailboxes, eDiscovery-style search, original restore, alternate restore, local export. It is the most familiar recovery story in the set.

SharePoint Explorer is more about hierarchy and granularity. Sites, libraries, lists, documents, versions, metadata. The notable limitation here is site pages. Not every page type is actually restorable the way people assume. If the page content is not stored in the SharePoint content database in a restorable way, Veeam cannot invent a restore path for it.

OneDrive Explorer is exactly the one you end up in when a user says they overwrote something important, deleted a folder tree, or got hit by ransomware and now wants the clean version from last week. The timeline-based browsing is what makes it useful.

Teams Explorer is the one that needs expectations set early. Files are one thing. Message history is another. Teams data sprawls into SharePoint, OneDrive, and Exchange, and the Explorer does a good job of abstracting that complexity. What it cannot do is push old Teams message history back into the live channel timeline, because Microsoft does not provide the API to do that. That is not a Veeam miss. That is a platform wall. If stakeholders do not hear that until the first restore request, the conversation gets unnecessarily ugly.

 

Restore Portal is useful because it keeps IT out of the easy recoveries

 

This is one of the more practical features in the product.

If users or delegated restore operators can recover their own mailbox items, OneDrive files, or SharePoint content through the portal, that cuts down the low-value restore tickets without handing out full admin rights.

That is exactly where self-service should sit: useful enough to reduce noise, limited enough that it does not become accidental chaos.

 

The common failure modes are not random

 

The most common first deployment mistake is still PostgreSQL overlap with VBR. Shared instance assumptions bite people fast.

After that, proxy-version mismatch after upgrading the VBO365 server is another easy way to create avoidable trouble. If the server moves and proxies do not, expect connection or processing issues.

Default PostgreSQL settings are another trap. Minimal resource settings are fine for getting software installed. They are not the same thing as a production tuning stance.

There are also a few rules that are just architectural facts:

  • one NATS server per deployment
  • proxy pools require object storage repositories
  • Teams messages do not restore back into Teams
  • legacy authentication is gone
  • v7 upgrade means migrating SQLite-backed data into PostgreSQL
  • archive mailbox handling has had build-specific issues, so check your exact build notes instead of assuming every v8 release behaves the same way

 

Final thoughts

 

The easiest way to misunderstand Veeam Backup for Microsoft 365 v8 is to think of it as “v7, but newer.”

It is not.

The move to PostgreSQL, shared cache, proxy pools, Linux proxies, NATS, and immutable object storage changes how you should design it from the start. If you treat it like a small-point upgrade, you will miss the parts that actually affect scale, recovery behavior, and operational sanity.

If I were deploying it fresh, I would get four things right before anything else: a proper PostgreSQL plan, a clean object-storage and retention design, realistic expectations around Teams recovery, and a proxy layout that fits the size of the environment instead of today’s smallest install. Once those are in place, the rest of the product starts making a lot more sense.

7 comments

Chris.Childerhose

I will say VB365 v8 has been much better than previous versions for sure.  A lot more setup, but it makes things more efficient, especially the Proxy Pools now with Linux servers.  We have also found configuring PGBouncer makes Postgres very efficient as well.


eblack
  • Author
  • Influencer
  • March 26, 2026

I will say VB365 v8 has been much better than previous versions for sure.  A lot more setup, but it makes things more efficient, especially the Proxy Pools now with Linux servers.  We have also found configuring PGBouncer makes Postgres very efficient as well.

I still remember the early versions and purging mailboxes with PS to shed licensing. :) It has come a very long way. 


kciolek
  • Influencer
  • March 26, 2026

Great article! Lately, all of my customers are interested in the VDC for M365. I haven’t done a demo of the traditional Veeam for M365 in quite some time. 


eblack
  • Author
  • Influencer
  • March 26, 2026

Great article! Lately, all of my customers are interested in the VDC for M365. I haven’t done a demo of the traditional Veeam for M365 in quite some time. 

Great point, I’m glad we have options. 


coolsport00
  • Veeam Legend
  • March 27, 2026

Nice detailed M365 post Eric! Thanks for sharing 👍🏻


eblack
  • Author
  • Influencer
  • March 27, 2026

Nice detailed M365 post Eric! Thanks for sharing 👍🏻

Thanks!


Iams3le
  • March 31, 2026

Excellent and well written piece!