
Veeam v13: VBR PostgreSQL Operational Tuning

  • March 24, 2026
  • 9 comments
  • 153 views

eblack

 

VBR v13 defaults to PostgreSQL for new installs, so for a lot of environments the question is no longer “Should we care about PostgreSQL?” It is “When do we move, and what do we need to fix after we do?”

If you are still running SQL Express, the answer is usually sooner rather than later. If you are running a licensed SQL Server only because VBR needs it, there is a good chance you are carrying cost and patching overhead you no longer need. And if you already migrated but the console feels slower than it used to, that is usually not because PostgreSQL is a bad fit. It is because the default PostgreSQL settings are conservative and generic, while VBR is not.

This is the part people miss. The migration itself is usually the easy piece. The performance conversation starts after the restore, not before it.

 

First question: do you actually need to migrate now?

 

Not every environment needs to sprint into this.

If you are on SQL Express and comfortably below the 10 GB limit, there is no emergency. It still works. You can move when it makes operational sense.

If you are close to that 10 GB ceiling, I would stop treating it as optional. SQL Express hits the cap and stops accepting writes, which is not the kind of surprise you want from a backup platform.
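
If you are not sure how close you are, a quick check against the configuration database answers it. This is a T-SQL sketch that assumes the default database name VeeamBackup; adjust if yours differs:

SQL example

-- Run against the VBR configuration database (default name: VeeamBackup).
-- The 10 GB SQL Express cap applies to the data file, not the log.
USE VeeamBackup;
SELECT name AS file_name,
       size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';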

If you have a fully licensed SQL Server sitting there mainly to support VBR, the migration case is easier to justify. You can reclaim the SQL licensing cost and cut down the care-and-feeding that comes with it.

If you are planning to move VBR to the Linux Software Appliance, then this stops being a debate. PostgreSQL is the only path there.

The one place I would slow down is an MSP-style environment with higher tenant counts and tape-heavy workflows. PostgreSQL can handle it, but that is where tuning stops being a recommendation and starts becoming part of the design. The same caution applies if you currently rely on SQL Server high-availability patterns like Always On or FCI for the VBR database. PostgreSQL is not giving you a direct equivalent there, so I would not pretend that part is solved.

 

The migration is not a live conversion

 

This is a backup-and-restore process, not some magical in-place engine swap.

You install PostgreSQL, take a configuration backup from VBR, then restore that configuration in migration mode and point it at the PostgreSQL instance. Jobs, schedules, infrastructure settings, and credentials come across. The platform is still VBR. What changes is the database engine behind it.

The practical sequence is straightforward:

  • Install PostgreSQL first. Veeam v13 supports PostgreSQL 14 and later, and the ISO includes PostgreSQL 15.
  • Take a configuration backup from the VBR console and encrypt it. Do not skip the encryption part unless you are comfortable with credentials being stored in clear text.
  • Run the configuration restore in migration mode and select the PostgreSQL target.
  • Then do the step too many guides barely mention: tune PostgreSQL for VBR.
  • After that, verify the console is healthy and only then re-enable your jobs.
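
The configuration backup in the steps above can also be triggered from PowerShell if you want it scripted. This is a sketch; the migration-mode restore itself still runs through the restore wizard:

PowerShell example

# Kick off a configuration backup ahead of the migration.
Connect-VBRServer -Server "localhost"
Start-VBRConfigurationBackupJob
Disconnect-VBRServer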

One other point is worth calling out because it causes a lot of false “PostgreSQL is slow” complaints: exclude the PostgreSQL data and binary directories from antivirus real-time scanning. That is not optional hygiene. It is one of the first things I would check in any post-migration performance complaint.
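
On a Windows VBR server running Microsoft Defender, those exclusions take a couple of cmdlets. The paths below assume a default PostgreSQL 15 install; verify them against your actual data and binary directories:

PowerShell example

# Default PostgreSQL 15 paths shown; adjust version and data directory to match your install.
Add-MpPreference -ExclusionPath "C:\Program Files\PostgreSQL\15\data"
Add-MpPreference -ExclusionPath "C:\Program Files\PostgreSQL\15\bin"
# Excluding the server process itself is also common practice.
Add-MpPreference -ExclusionProcess "postgres.exe"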

 

The tuning command matters more than most people think

 

Set-VBRPSQLDatabaseServerLimits is one of those commands that sounds like a nice extra until you compare a default PostgreSQL build against one sized for the hardware it is actually running on.

Out of the box, PostgreSQL uses conservative values because it has no idea what your workload really looks like. VBR is not a casual database workload. It reads hard during reporting, writes hard during active jobs, and can hit bursts of concurrent activity that the stock settings are not sized for.

That is why the tuning command matters. It looks at the actual server resources and generates recommended values you can apply to PostgreSQL. Typical areas it influences are shared_buffers, effective_cache_size, maintenance_work_mem, max_connections, work_mem, and wal_buffers.

A practical example looks like this:

PowerShell example

Connect-VBRServer -Server "localhost"
Set-VBRPSQLDatabaseServerLimits -DumpToFile "C:\temp\pg-recommended.txt"
Disconnect-VBRServer
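
Once you have the dump file, the values can be applied with ALTER SYSTEM and a reload rather than hand-editing postgresql.conf. The numbers below are illustrative placeholders only; use whatever the command generated for your hardware:

SQL example

-- Illustrative values; substitute the output of Set-VBRPSQLDatabaseServerLimits.
ALTER SYSTEM SET shared_buffers = '4GB';
ALTER SYSTEM SET effective_cache_size = '12GB';
ALTER SYSTEM SET work_mem = '32MB';
ALTER SYSTEM SET max_connections = 500;
-- Reload picks up most settings; shared_buffers and max_connections need a service restart.
SELECT pg_reload_conf();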

 

Vacuum is not a problem until it is

 

PostgreSQL uses MVCC, which means updated and deleted rows do not disappear immediately. Old row versions stay behind until vacuum cleans them up. In a database that sees frequent updates, that is normal. VBR absolutely qualifies.

In most environments, autovacuum does its job quietly in the background and you never need to think much about it.

The places where I would start paying attention are not subtle. One is right after migration from a heavily used SQL Express system, especially if the database already had some age and baggage. Another is any environment doing file-to-tape work at real scale, where one job can generate a huge amount of metadata quickly. The third is when the VBR console starts feeling slow in job-history views and you need to confirm whether dead-row buildup is part of the story.

A quick way to look for that is:

SQL example

SELECT relname AS table_name,
       n_live_tup,
       n_dead_tup,
       ROUND(n_dead_tup::numeric / NULLIF(n_live_tup + n_dead_tup, 0) * 100, 1) AS dead_pct,
       last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC
LIMIT 20;

And if you find a table that is obviously carrying dead weight:

VACUUM ANALYZE <table_name>;

 

The monitoring queries are simple, but they tell you a lot

 

You do not need a giant observability platform just to answer the first two questions that matter.

The first is: what is active in the database right now?

The second is: how close am I to the connection ceiling?

Those two checks alone explain a lot of “VBR feels weird today” situations.

SQL example

SELECT pid,
       usename,
       state,
       query_start,
       EXTRACT(EPOCH FROM (now() - query_start)) AS seconds,
       LEFT(query, 120) AS query_preview
FROM pg_stat_activity
WHERE datname = 'VeeamBackup'
  AND state != 'idle'
ORDER BY query_start;

And for connection pressure:

SELECT COUNT(*) AS current,
       (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max,
       ROUND(
         COUNT(*)::numeric /
         (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') * 100,
         1
       ) AS pct
FROM pg_stat_activity
WHERE datname = 'VeeamBackup';

Those are not fancy, but they are useful. When the system is under strain, they usually point you in the right direction quickly.

 

What changes once you get past 50 tenants

 

This is where the article gets more interesting, because PostgreSQL itself is usually not the problem. Scale discipline is.

The first limit that shows up is often connection saturation. VBR opens database connections for backup jobs, restore sessions, and the rest of the usual management activity. In a busy MSP environment, especially one with a lot of parallel tenant operations, the default max_connections value can get consumed faster than people expect.

When that happens, the failure pattern is annoying because it does not always look like “database tuning problem.” It looks like random VBR issues, job failures, or infrastructure weirdness. But underneath it, you are just out of headroom.

The first response is to set max_connections based on the output of Set-VBRPSQLDatabaseServerLimits, not whatever default PostgreSQL started with. The second is to think seriously about PgBouncer once concurrency gets high enough that connection pooling makes operational sense. PgBouncer is not mandatory everywhere, but past a certain point it is a much cleaner answer than pretending the raw connection count will stay manageable forever.
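
If PgBouncer does enter the picture, the heart of it is a small configuration file that pools many client connections down to a smaller number of server connections. A minimal sketch with placeholder names and sizes, not tuned recommendations:

Configuration example (pgbouncer.ini)

; Placeholder values; size the pools for your actual concurrency.
[databases]
VeeamBackup = host=127.0.0.1 port=5432 dbname=VeeamBackup

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
max_client_conn = 1000
default_pool_size = 50

Session pooling is the conservative choice here, since transaction pooling breaks features like prepared statements that the application may rely on.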

The second scale problem is tape metadata growth.

File-to-tape jobs are not subtle. One metadata row per file means a NAS job with millions of files can explode table growth faster than autovacuum’s default rhythm can keep up with. That is where you stop relying purely on background cleanup and start planning for more aggressive autovacuum behavior, scheduled VACUUM ANALYZE during quieter windows, and saner tape-catalog retention settings.
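
For tables you know grow fast, autovacuum can be tuned per table instead of globally. The table name here is a hypothetical placeholder and the thresholds are illustrative; identify the real hot tables with pg_stat_user_tables first:

SQL example

-- "tape_catalog_table" is a hypothetical placeholder, not a real VBR table name.
ALTER TABLE tape_catalog_table SET (
    autovacuum_vacuum_scale_factor = 0.02,   -- vacuum at ~2% dead rows instead of the 20% default
    autovacuum_analyze_scale_factor = 0.01
);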

That is not PostgreSQL failing. That is the workload asking more of it than the default profile was built to deliver.

Final thoughts

 

The biggest mistake in this whole conversation is assuming the migration is the hard part.

Usually it is not.

The harder part is deciding whether your current SQL deployment is still worth keeping, then making sure PostgreSQL is actually configured for the workload you are moving onto it. If you migrate and leave PostgreSQL at its most generic defaults, you are not really testing PostgreSQL against SQL Server. You are testing tuned SQL Server against untuned PostgreSQL, which is not a fair comparison and usually not a useful one.

If I were approaching this in production, I would do four things first: make the migration call based on the actual environment rather than habit, exclude PostgreSQL from AV scanning before blaming performance, run Set-VBRPSQLDatabaseServerLimits and apply the results, and keep an eye on dead rows and connection pressure once the system is live.

That is the point where PostgreSQL starts behaving like a proper VBR backend instead of a generic database that happened to get installed.

 

9 comments

Chris.Childerhose

Much prefer Postgres for Veeam and we are working hard to migrate to this from SQL.


kciolek
  • Influencer
  • March 24, 2026

great article! I’m definitely a fan of Postgres ..I migrated one of the lab servers recently. 


eblack
  • Author
  • Influencer
  • March 24, 2026

Much prefer Postgres for Veeam and we are working hard to migrate to this from SQL.

I feel like it handles tables better. I can’t recall the last time I’ve had to hunt stale replica tables since converting. 


Chris.Childerhose

Much prefer Postgres for Veeam and we are working hard to migrate to this from SQL.

I feel like it handles tables better. I can’t recall the last time I’ve had to hunt stale replica tables since converting. 

Yeah, just takes a little getting used to the query commands from SQL.  😋

 
 
 

Stabz
  • Veeam Legend
  • March 24, 2026

I m waiting postgresql for Veeam One , VSPC and Veeam Recovery Orchestrator.

 


eblack
  • Author
  • Influencer
  • March 24, 2026

I m waiting postgresql for Veeam One , VSPC and Veeam Recovery Orchestrator.

 

Aren’t we all! :)


Chris.Childerhose

I was going to say the same thing for those apps.  It will be nice once they convert them to PG from SQL. 😂


wolff.mateus
  • Veeam Vanguard
  • March 24, 2026

Nice content! I do not have in mind some points that you bring here ​@eblack.

Thanks for share.


eblack
  • Author
  • Influencer
  • March 24, 2026

Nice content! I do not have in mind some points that you bring here ​@eblack.

Thanks for share.

Glad to hear, thanks!