
Hello, for our applications we use PGO Postgres clusters. When the Kubernetes cluster contains more than one Postgres cluster, it is not possible to create a backup. The PGO clusters are in different namespaces. The backup process fails with this error:

"pgoDBStatefulSet" not found

 

When I keep the same policy configuration and remove one Postgres cluster, the backup completes successfully.

Could you please check what is wrong?

Thanks

 

Hi there, I think the best way to protect CNPG would be to use the Kanister aspect of Veeam Kasten, which is a way to instruct the operator to create a consistent backup of the database.

There is a community-supported version here to get you started:

 

https://github.com/michaelcourcy/kasten-cnpg
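
For a rough idea of the pattern, the usual Kanister/Kasten flow is to install the blueprint and then bind it to the resource with the kanister.kasten.io/blueprint annotation. This is only a sketch: the blueprint name (cnpg-bp), cluster name (my-cnpg-cluster) and namespaces below are placeholders, and the repo's README describes the exact binding it expects.

    # Sketch only: install a Kanister blueprint and bind it via annotation.
    # All names here are placeholders for illustration, not values from the repo.
    kubectl apply -n kasten-io -f cnpg-blueprint.yaml
    kubectl annotate cluster.postgresql.cnpg.io my-cnpg-cluster \
        kanister.kasten.io/blueprint=cnpg-bp -n my-app-namespace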


Hello, but this is not about consistent backup. Kasten uses pgBackRest for the consistent backup and that step completes successfully, but the operation cannot finish because of the "pgoDBStatefulSet" not found problem. The same operation with a single Postgres cluster on the Kubernetes cluster works fine.


Forgive me then; if you are using an example Postgres blueprint, then this will absolutely not work with CNPG, as CNPG is an operator and not a plain Postgres deployment of the kind the example blueprint assumes.
 

If you have written your own blueprint, it would be handy to see the steps you have configured.
 

Failing that, if Kasten is picking up that you have Postgres, I believe this would also not be operator-aware, and we should file a bug if that is the case.
 

The only way to protect CNPG with Veeam Kasten is via a community blueprint. 


Kasten automatically detects our PostgresCluster and created its own blueprint for it, called k10-pgo-bp-0.0.3. Kasten uses this blueprint automatically when I want to create a backup and restore. It works properly only when we have a single Postgres cluster, so something is wrong in this definition:

    # Find backup repo
    postgresql_sts_name=""
    counter=0
    while [ -z "$postgresql_sts_name" ] && [ $counter -lt 60 ]; do
        postgresql_sts_name=$(kubectl get statefulsets -n {{ .Object.metadata.namespace }} \
            -o jsonpath='{range .items[*]}{@.metadata.name}|{@.metadata.labels.postgres-operator\.crunchydata\.com/cluster}{"\n"}{end}' \
            | awk -F'|' "{ if (\$2 == \"{{ .Object.metadata.name }}\") {print \$1}}" | head -n 1)
        sleep 5
        let "counter=$counter+1"
    done
    backup_repo=$(kubectl get postgrescluster {{ .Object.metadata.name }} -n {{ .Object.metadata.namespace }} \
        -o jsonpath="{.spec.backups.pgbackrest.manual.repoName}")

    kando output pgoBackupRepo $backup_repo
    kando output pgoDBStatefulSet $postgresql_sts_name
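
For comparison, here is a rough sketch (not the blueprint Kasten actually generated) that resolves the StatefulSet with a label selector instead of the jsonpath/awk pipeline, assuming the operator sets the postgres-operator.crunchydata.com/cluster label on its StatefulSets, which is the same label the blueprint above already relies on:

    # Sketch only: look up the cluster's StatefulSet directly by label.
    # Returns an empty string (and therefore no kando output) if no match exists.
    postgresql_sts_name=$(kubectl get statefulsets -n {{ .Object.metadata.namespace }} \
        -l postgres-operator.crunchydata.com/cluster={{ .Object.metadata.name }} \
        -o jsonpath='{.items[*].metadata.name}' | awk '{print $1}')
    kando output pgoDBStatefulSet $postgresql_sts_name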


  Sorry, I have completely misread your initial comment.

     

    I read CNPG instead of PGO. My mistake completely.


    For reference, the issue was caused by the blueprint responsible for the PGO backup. Deleting the problematic blueprint allows K10 to create a new one, or it can be replaced with a functional blueprint, as @JARTYMYK Tomas & @Pavithra have already done.
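
    A minimal sketch of that cleanup, assuming Kasten is installed in the default kasten-io namespace and the generated blueprint is the k10-pgo-bp-0.0.3 mentioned above:

        # Sketch only: list the Kanister blueprints and remove the generated PGO
        # one so K10 recreates it on the next backup run.
        kubectl get blueprints.cr.kanister.io -n kasten-io
        kubectl delete blueprints.cr.kanister.io k10-pgo-bp-0.0.3 -n kasten-io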

     

