Solved

PostgreSQL database getting very big and slow


PHuber
Forum|alt.badge.img
  • Not a newbie anymore
  • 4 comments

Hello community,

I have a customer doing NAS backups from NetApp shares to a hardened disk repository and then writing those NAS backups to tape.

We are using a PostgreSQL database as the configuration database. It is running on an NVMe device.

We found that the DB has grown continuously over the last 4 months: starting at around 40 GB, it has now reached around 650 GB.

Opening the Veeam console to access the Veeam management server takes very long, and other processes are also affected. We did some root-cause analysis and found that the server hardware is not the bottleneck.

Are there any options to do maintenance on the DB and/or keep it from growing continuously? Or maybe some other tasks to address these issues?

 

Best answer by damien commenge


13 comments

coolsport00
Forum|alt.badge.img+20
  • Veeam Legend
  • 4109 comments
  • September 26, 2024

I haven’t heard of a DB growing like that @PHuber . Aside from a tuning suggestion given in the User Guide:

https://helpcenter.veeam.com/docs/backup/vsphere/postgresql_instance_configuration.html?ver=120

I’d also recommend contacting Veeam Support to see what other DB performance enhancements & modifications can be done to keep your DB more in check.

Best.


Forum|alt.badge.img+1
  • New Here
  • 5 comments
  • September 26, 2024

This seems quite odd. Following this. 


mpm19958
  • New Here
  • 5 comments
  • September 26, 2024

Could you have possibly configured your NAS cache repository to use the same repository as the Veeam Config DB? 


Michael Melter
Forum|alt.badge.img+12
  • Veeam Legend
  • 144 comments
  • September 28, 2024

To a certain extent, this is expected behavior: The reason is your file-to-tape job, secondary to your NAS job. Even with an efficient NAS job before it, it actually acts as file-to-tape AFAIK.

File-to-tape jobs generate a lot of metadata in the DB, as they put roughly the same amount of data into it per file as a VM job does per VM. Most people have way more files than VMs… 😉

Therefore, it was always recommended to use a fully licensed and well performing MS-SQL server for VBR, when file-to-tape comes into play.

https://bp.veeam.com/vbr/2_Design_Structures/D_Veeam_Components/D_VBR_DB/database.html

https://helpcenter.veeam.com/docs/backup/vsphere/file_to_tape_jobs.html?ver=120

https://forums.veeam.com/tape-f29/file-to-tape-limitation-t62821.html

With PostgreSQL for VBR still being fairly new, there are not many recommendations here yet.

How many files are we talking about in your case? It’s the number of files that counts. I would assume >>1M in your case to account for 650 GB of DB size.

To my knowledge, there is no way to reduce the size of the DB with regards to tape data - except for removing it from your tape backups again.

I would engage support to let Veeam know about your use case and make sure that everything is set up correctly.

Best, Michael.
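Michael’s point about per-file metadata can be sanity-checked with rough arithmetic. A minimal sketch, assuming a purely illustrative ~2 KB of catalog metadata per file version (this figure is an assumption, not a documented Veeam number):

```python
# Back-of-envelope estimate of how file-to-tape metadata inflates the
# VBR configuration DB.
# ASSUMPTION: ~2 KB per catalogued file version is illustrative only.

def estimated_db_bytes(files: int, versions_per_file: int,
                       bytes_per_entry: int = 2048) -> int:
    """Metadata footprint if every file version gets its own catalog row."""
    return files * versions_per_file * bytes_per_entry

# Example: 1M files retained across ~40 tape sessions.
size = estimated_db_bytes(files=1_000_000, versions_per_file=40)
print(f"{size / 1e9:.0f} GB")  # prints "82 GB"
```

Even with conservative assumptions, a million files multiplied by dozens of retained versions lands in the tens of gigabytes from metadata alone - which is why the file count, not the data volume, drives DB size here.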


damien commenge
Forum|alt.badge.img+5
Michael Melter wrote:

To a certain extent, this is expected behavior: The reason is your file-to-tape job, secondary to your NAS job. Even with an efficient NAS job before it, it actually acts as file-to-tape AFAIK.

File-to-tape jobs generate a lot of metadata in the DB, as they put roughly the same amount of data into it per file as a VM job does per VM. Most people have way more files than VMs… 😉

Therefore, it was always recommended to use a fully licensed and well performing MS-SQL server for VBR, when file-to-tape comes into play.

https://bp.veeam.com/vbr/2_Design_Structures/D_Veeam_Components/D_VBR_DB/database.html

https://helpcenter.veeam.com/docs/backup/vsphere/file_to_tape_jobs.html?ver=120

https://forums.veeam.com/tape-f29/file-to-tape-limitation-t62821.html

With PostgreSQL for VBR still being fairly new, there are not many recommendations here yet.

How many files are we talking about in your case? It’s the number of files that counts. I would assume >>1M in your case to account for 650 GB of DB size.

To my knowledge, there is no way to reduce the size of the DB with regards to tape data - except for removing it from your tape backups again.

I would engage support to let Veeam know about your use case and make sure that everything is set up correctly.

Best, Michael.

I can confirm that when you use file-to-tape jobs, the DB grows a lot, because every file you have on all your tapes is also present in the database…

However, I’m sorry, but I don’t find any recommendation that SQL Server should be preferred over PostgreSQL in this scenario... I don’t find it in your links either... SQL Standard was recommended before PostgreSQL simply because it was the only way: the 10 GB limit of SQL Express can’t handle a lot of file-to-tape data, so you were forced to migrate if you wanted to keep backing up your data…

The only way to reduce space that support gave me was to remove some tapes from the catalog and restart the Veeam services to get the space back.


Michael Melter
Forum|alt.badge.img+12
  • Veeam Legend
  • 144 comments
  • September 29, 2024
damien commenge wrote:

 

I can confirm that when you use file-to-tape jobs, the DB grows a lot, because every file you have on all your tapes is also present in the database…

However, I’m sorry, but I don’t find any recommendation that SQL Server should be preferred over PostgreSQL in this scenario... I don’t find it in your links either... SQL Standard was recommended before PostgreSQL simply because it was the only way: the 10 GB limit of SQL Express can’t handle a lot of file-to-tape data, so you were forced to migrate if you wanted to keep backing up your data…

The only way to reduce space that support gave me was to remove some tapes from the catalog and restart the Veeam services to get the space back.

It’s only the metadata that fills up the database. The files themselves go to tape, not to the database. I guess that is what you meant, @damien commenge.

There is no specific recommendation to prefer MS-SQL over PostgreSQL for file to tape. But it’s not only about hard database size limits. For SQL Express, the main problem was usually sluggish performance: you are restricted to 1 socket / 4 cores and 1410 MB of buffer pool memory, and the latter is the most important restriction for VBR when it comes to performance.

This is what I tried to emphasize with the links provided.

 

There is a recommended limit for VM backups with PostgreSQL: with 12.0 it was 5k VMs, which was lifted to 10k VMs with 12.1.

Current number of VM limitations - R&D Forums (veeam.com)

It was always recommended not to have more than ~500 VMs with SQL Express, while there was no VM limitation with the full-blown SQL Server.

 

650 GB is quite a size for a VBR database - one that would usually be expected for way more than 10k VMs, so we are probably beyond the recommended limit for PostgreSQL here. At the very least, you would need to size the database server specifically in terms of CPU, RAM, and IOPS.

I would be interested in the actual number of files and versions here.


damien commenge
Forum|alt.badge.img+5
  • Veeam Legend
  • 115 comments
  • Answer
  • September 29, 2024
Michael Melter wrote:
damien commenge wrote:

 

I can confirm that when you use file-to-tape jobs, the DB grows a lot, because every file you have on all your tapes is also present in the database…

However, I’m sorry, but I don’t find any recommendation that SQL Server should be preferred over PostgreSQL in this scenario... I don’t find it in your links either... SQL Standard was recommended before PostgreSQL simply because it was the only way: the 10 GB limit of SQL Express can’t handle a lot of file-to-tape data, so you were forced to migrate if you wanted to keep backing up your data…

The only way to reduce space that support gave me was to remove some tapes from the catalog and restart the Veeam services to get the space back.

It’s only the metadata that fills up the database. The files themselves go to tape, not to the database. I guess that is what you meant, @damien commenge.

There is no specific recommendation to prefer MS-SQL over PostgreSQL for file to tape. But it’s not only about hard database size limits. For SQL Express, the main problem was usually sluggish performance: you are restricted to 1 socket / 4 cores and 1410 MB of buffer pool memory, and the latter is the most important restriction for VBR when it comes to performance.

This is what I tried to emphasize with the links provided.

 

There is a recommended limit for VM backups with PostgreSQL: with 12.0 it was 5k VMs, which was lifted to 10k VMs with 12.1.

Current number of VM limitations - R&D Forums (veeam.com)

It was always recommended not to have more than ~500 VMs with SQL Express, while there was no VM limitation with the full-blown SQL Server.

 

650 GB is quite a size for a VBR database - one that would usually be expected for way more than 10k VMs, so we are probably beyond the recommended limit for PostgreSQL here. At the very least, you would need to size the database server specifically in terms of CPU, RAM, and IOPS.

I would be interested in the actual number of files and versions here.

Yes, all the file names are in the DB - that is the metadata :)

Then I think I probably misunderstood your message here: "Therefore, it was always recommended to use a fully licensed and well performing MS-SQL server for VBR, when file-to-tape comes into play." I believed you were saying SQL Server is recommended for file to tape! I totally agree with you about the SQL Express limitations; it can’t be used in a big company.

My database was around 250 GB, and it was painful to perform the SQL Server to PostgreSQL migration, but I can see that the problem should be fixed now (the changelog mentioned improvements for the scenario of migrating big SQL databases to PostgreSQL).

 


PHuber
Forum|alt.badge.img
  • Author
  • Not a newbie anymore
  • 4 comments
  • October 7, 2024
Michael Melter wrote:

To a certain extent, this is expected behavior: The reason is your file-to-tape job, secondary to your NAS job. Even with an efficient NAS job before it, it actually acts as file-to-tape AFAIK.

File-to-tape jobs generate a lot of metadata in the DB, as they put roughly the same amount of data into it per file as a VM job does per VM. Most people have way more files than VMs… 😉

Therefore, it was always recommended to use a fully licensed and well performing MS-SQL server for VBR, when file-to-tape comes into play.

https://bp.veeam.com/vbr/2_Design_Structures/D_Veeam_Components/D_VBR_DB/database.html

https://helpcenter.veeam.com/docs/backup/vsphere/file_to_tape_jobs.html?ver=120

https://forums.veeam.com/tape-f29/file-to-tape-limitation-t62821.html

With PostgreSQL for VBR still being fairly new, there are not many recommendations here yet.

How many files are we talking about in your case? It’s the number of files that counts. I would assume >>1M in your case to account for 650 GB of DB size.

To my knowledge, there is no way to reduce the size of the DB with regards to tape data - except for removing it from your tape backups again.

I would engage support to let Veeam know about your use case and make sure that everything is set up correctly.

Best, Michael.

Hi Michael,
Thank you very much for your thoughts and input here. I am aware that NAS-to-tape jobs are treated as "file to tape" jobs. But I still wonder why the DB is growing that big… We will look into it and check the limits and requirements again. Best regards


PHuber
Forum|alt.badge.img
  • Author
  • Not a newbie anymore
  • 4 comments
  • October 7, 2024
mpm19958 wrote:

Could you have possibly configured your NAS cache repository to use the same repository as the Veeam Config DB? 

Nope, checked that already.


Michael Melter
Forum|alt.badge.img+12

@PHuber: Just out of curiosity, what is the total # of files in your NAS backups?


PHuber
Forum|alt.badge.img
  • Author
  • Not a newbie anymore
  • 4 comments
  • October 7, 2024

Hi,

we started with around 1M. But the customer added more shares recently, so we need to check whether that breaks the limits 😉. We need to analyse…


PHuber
Forum|alt.badge.img
  • Author
  • Not a newbie anymore
  • 4 comments
  • January 28, 2025

Hello,

just to follow up on this, I want to share some insights on this specific problem.
We opened a case with Veeam after a very long troubleshooting process. The case was escalated to PM. Support analyzed the PostgreSQL database and found that billions of "orphaned objects" were located inside the DB. They supplied a script, which ran for a couple of days, deleting all orphaned objects inside the DB. After that, the console opened as expected and response times were normal ;).

So the takeaways for me here are:

  1. Size the environment appropriately, following Veeam sizing guidelines.
  2. Always put the PostgreSQL DB on SSD/NVMe when using NAS backup, file to tape, or both - or when you have a big environment.
  3. Configure the PostgreSQL DB following the Veeam recommendations in the user guide.
  4. Don’t back up big file shares all together. Separate jobs and shares - NAS jobs AND tape jobs.
  5. Don’t use a high number of retention points for file backup if you can avoid it.
  6. Observe DB usage and size.
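For the last point, standard PostgreSQL catalog queries are enough to watch where the space goes. A minimal sketch, assuming local psql access and the default "VeeamBackup" database name (verify the actual name in your deployment):

```python
# Sketch: watch VBR config DB growth with standard PostgreSQL catalog queries.
# ASSUMPTION: "VeeamBackup" is the DB name; adjust for your deployment.
import subprocess

DB_NAME = "VeeamBackup"

# Total size of the current database, human-readable.
DB_SIZE_SQL = "SELECT pg_size_pretty(pg_database_size(current_database()));"

# Ten largest tables including indexes and TOAST data - with file-to-tape
# workloads, the tape catalog tables are the usual suspects.
TOP_TABLES_SQL = """
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
"""

def run_psql(sql: str) -> str:
    """Run a query through the psql CLI; requires access to the instance."""
    out = subprocess.run(["psql", "-d", DB_NAME, "-At", "-c", sql],
                         capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    print(run_psql(DB_SIZE_SQL))
    print(run_psql(TOP_TABLES_SQL))
```

Running this periodically (or wiring it into monitoring) makes a metadata blow-up like the one in this thread visible long before the console slows down.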
     

What to improve:

  1. Veeam should implement "metadata files" (or a similar technology) for file backup instead of putting all metadata into the DB. In bigger environments, that makes the DB grow very fast and slows down the environment.
  2. Veeam should publish recommendations on how to set up the PostgreSQL maintenance features (e.g. vacuum).
  3. An option to restore file cache data or tape catalog data separately from a complete DB restore, e.g. restoring file-to-tape information without restoring the whole environment.
  4. More focus on optimizing unstructured data backup and handling the data sprawl. :-)


Maybe some of you will find this useful.

Regards,

 
  


coolsport00
Forum|alt.badge.img+20
  • Veeam Legend
  • 4109 comments
  • January 28, 2025

Thanks for the update ​@PHuber . I recommend going over to the Forums to post your product suggestions. Veeam PMs will see them, and hopefully some will get added in a future release.

Best.

