Hi, when I follow this logical MySQL backup for OpenShift, the backup completes normally, but when I use the backup point to restore MySQL, the database test that I created before the backup is not restored.
My blueprint content is below:
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mysql-dep-config-blueprint
actions:
  backup:
    outputArtifacts:
      mysqlBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using `--output-name` flag
        kopiaSnapshot: "{{ .Phases.dumpToStore.Output.kopiaOutput }}"
    phases:
    - func: KubeTask
      name: dumpToStore
      objects:
        mysqlsecret:
          kind: Secret
          #name: "{{ .DeploymentConfig.Name }}"
          name: mysql-password
          #namespace: "{{ .DeploymentConfig.Namespace }}"
          namespace: mjw-sfs
      args:
        #image: ghcr.io/kanisterio/mysql-sidecar:0.83.0
        image: registry.example.com:8443/kasten-images/mysql-sidecar:0.83.0
        namespace: mjw-sfs
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          backup_file_path="dump.sql"
          root_password="{{ index .Phases.dumpToStore.Secrets.mysqlsecret.Data "password" | toString }}"
          dump_cmd="mysqldump --column-statistics=0 -u root --password=${root_password} -h mysql-bk --single-transaction --all-databases"
          ${dump_cmd} | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -
  restore:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
    - func: KubeTask
      name: restoreFromStore
      objects:
        mysqlsecret:
          kind: Secret
          name: mysql-password
          namespace: mjw-sfs
      args:
        image: registry.example.com:8443/kasten-images/mysql-sidecar:0.83.0
        namespace: mjw-sfs
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          backup_file_path="dump.sql"
          kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
          root_password="{{ index .Phases.restoreFromStore.Secrets.mysqlsecret.Data "password" | toString }}"
          restore_cmd="mysql -u root --password=${root_password} -h mysql-bk"
          kando location pull --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}" - | ${restore_cmd}
  delete:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
    - func: KubeTask
      name: deleteFromStore
      args:
        image: registry.example.com:8443/kasten-images/mysql-sidecar:0.83.0
        namespace: mjw-sfs
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          backup_file_path="dump.sql"
          kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
          kando location delete --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}"
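For context, a blueprint like this is created as a custom resource in the K10 namespace before any workload references it; a minimal sketch, assuming the YAML above is saved as blueprint.yaml:

  kubectl --namespace kasten-io create -f blueprint.yaml
  kubectl --namespace kasten-io get blueprints.cr.kanister.io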
The kanister-svc log shows an error, as in the screenshot below:
What is the root cause of this error?
@jaiganeshjk
@meijianwei Thank you for posting this question.
From your screenshot, I could see that the phase `restoreFromStore` is completed. However, the progress tracking from the job is actually failing.
I am not sure about the exact reason for this. I see that you are on a very old and probably unsupported version of K10, 5.0.11.
My suggestion is to upgrade K10 to the latest or n-1 version and see if you can run the blueprint-based backup again.
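For reference, a typical upgrade looks roughly like the sketch below; the chart source, release name, and values are assumptions and must match your installation (in an air-gapped setup you would point at a local chart tarball and your own image registry instead):

  helm repo add kasten https://charts.kasten.io/
  helm repo update
  # --reuse-values keeps the settings from the existing release
  helm upgrade k10 kasten/k10 --namespace=kasten-io --reuse-values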
If you still get the same issue, we can dig deeper into debugging it by looking further into the logs.
@jaiganeshjk thanks for your reply.
I already upgraded Kasten K10 from 5.0.11 to 5.5.6 using the command “helm upgrade k10 /root/k10/k10-5.5.6.tgz --namespace=kasten-io -f /root/mjw/k10/test/k10_val.yaml”.
But two pods are failing, as in the screenshot below.
Detailed error info for pod catalog-svc-5c7c6764f7-ntswr:
Detailed error info for pod metering-svc-667fb78dc4-k72cd:
Yes, I know the reason: the volumes pvc-14f8ae51-7f71-4966-871a-f38599165cd7 and pvc-c8849df6-25ea-45c1-8964-668ae7b5893b were deleted, because I changed the backend from nas-130 to nas-131 and removed them manually. I did not know that K10 would not recreate them automatically after an upgrade or reboot. How can I recover them now? Could you guide me? Thank you very much.
Strangely, I did the same thing to the volumes of pods jobs-svc-848d69b956-fmg7g and logging-svc-68d94d9b45-xm7dk, and those two pods recreated their volumes automatically after the K10 upgrade. I don't know why catalog-svc-5c7c6764f7-ntswr and metering-svc-667fb78dc4-k72cd cannot.
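Before recreating anything, it can help to confirm which of the K10 service PVCs are actually missing or stuck; a sketch (the pod name is the one from this thread, and the output will differ per cluster):

  kubectl get pvc --namespace kasten-io
  kubectl get pv | grep kasten-io
  kubectl describe pod catalog-svc-5c7c6764f7-ntswr --namespace kasten-io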
The PVCs were recreated automatically after upgrading K10.
Update.
When I removed the failed pods and let them restart, only one pod still fails.
Detailed error info:
Log from /var/log/containers/metering-svc-667fb78dc4-85vmt_kasten-io_upgrade-init-8691520d4b7657bf07a8a8c3125bef9b4e8fe7fda8afbf5ed102d824eb97c020.log:
Could you help me fix this issue? Thank you very much.
This is usually a problem with storage backends that expose client access to the snapshot directory.
K10 tries to run chown recursively and fails because the `.snapshot` directory is read-only.
May I know which NAS backend you are using in this case?
Is there a way you can disable client access to the snapshot directory for this volume?
@jaiganeshjk thanks, the backend NAS is a Huawei OceanStor Dorado 5600 V6 6.1.5.
I'll check it. But I did not hit this problem in K10 5.0.11, and I don't know why it appears in K10 5.5.6.
Recent versions probably have improvements in how we handle the phone-home reports.
@jaiganeshjk In the end I rolled back to K10 5.0.11 and fixed some of the issues I had met before.
Now, the backup looks fine.
But the restore fails; during the restore I can see the restore-data pods as in the screenshot below.
The error info is below:
Could you help me with this problem? thanks a lot.
@jaiganeshjk Update again; please ignore the problem above. I installed K10 version 5.5.8 successfully.
But the MySQL data is still not restored. Is there something wrong with any of my steps? The detailed steps are below.
Deploy the StatefulSet; all pods are in the Running state.
Create a new database named test.
Back up the namespace k10-mysql in Kasten K10.
Restore to a new namespace named restore-2.
The application in restore-2 is restored successfully.
But the database test created before is not restored.
Why is the Snapshot size here 0B?
The logical blueprint that you are using is an example created for standalone MySQL, not for HA.
I don't know how the data is synced between the replicas. You will have to keep in mind how the HA setup works before backing it up.
The Snapshot field shows the capacity of the local snapshots created by the CSI driver or the provider.
However, your backup was blueprint-based and was pushed to your location target, so it is counted under object storage and not under Snapshots.
But I don't know why my MySQL data is not restored.
By the way, how can I set injectKanisterSidecar.enabled to false from the command line? I want to verify again using only snapshots.
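For what it's worth, a Helm value like this is normally flipped on the existing release; a sketch, assuming the release name k10 and namespace kasten-io used earlier (substitute your local chart tarball for kasten/k10 in an air-gapped setup):

  helm upgrade k10 kasten/k10 --namespace=kasten-io --reuse-values \
    --set injectKanisterSidecar.enabled=false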
@jaiganeshjk I verified that the backup and restore did not use injectKanisterSidecar, only a snapshot, but the database data is still not restored.
The steps are as below:
Deploy the MySQL application and log in to MySQL.
Create database test and insert three records.
Create a policy on the K10 dashboard and back up the namespace deploy-mysql; the result is fine.
During the backup I can see the snapshot being created.
Then restore to a new namespace named restore-11 using the restore point of the backup.
Similarly, I can also see the restored snapshot while restoring.
Check the pod status in namespace restore-11 and log in to MySQL to check the database test. The result is that the database test has not been restored (see the command sketch after these steps).
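For completeness, the kind of commands behind these steps, as a sketch; the host mysql-bk and the table name t1 are assumptions, and the root password comes from the mysql-password secret used in the blueprint:

  # before the backup: create the test database and three records
  mysql -u root -p"${ROOT_PASSWORD}" -h mysql-bk -e "CREATE DATABASE test;"
  mysql -u root -p"${ROOT_PASSWORD}" -h mysql-bk -e "CREATE TABLE test.t1 (id INT); INSERT INTO test.t1 VALUES (1),(2),(3);"
  # after restoring into the new namespace: check whether the data came back
  mysql -u root -p"${ROOT_PASSWORD}" -h mysql-bk -e "SHOW DATABASES; SELECT * FROM test.t1;"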
Could you help me with this problem? Thank you very much.
My Kubernetes platform is OpenShift 4.10.0-0.okd-2022-07-09-073606.
The Kubernetes version is v1.23.5-rc.0.2076+8cfebb1ce4a59f-dirty.
@meijianwei The regular snapshot should function correctly. Could you please provide the K10 values file and the YAML file of your MySQL StatefulSet? Also, how did you disable the injection of the Kanister sidecar?
@Hagag thanks for your response.
I have tested both a StatefulSet and a Deployment, but got the same result.
I disabled the injection of the Kanister sidecar with the parameter “--set injectKanisterSidecar.enabled=true”.
@meijianwei The "injectKanisterSidecar.enabled" parameter is utilized when performing a Generic Storage Backup and Restore and is not associated with the logical backup of MySQL.
It appears that you have confused two configurations. Earlier, you had mentioned that you were following a GitHub link to back up your database (https://github.com/kanisterio/kanister/tree/master/examples/mysql). That link employs Kanister, which is a tool for managing data protection workflows.
If you want to enable logical MySQL backup, I suggest using the link below. However, before proceeding, it's important to remove any previously created Artifacts and CRDs.
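As a sketch of the kind of cleanup meant here, assuming the objects carry the names used in this thread (adjust namespaces to your setup):

  # remove the old blueprint custom resource
  kubectl --namespace kasten-io delete blueprints.cr.kanister.io mysql-dep-config-blueprint
  # look for leftover Kanister actionsets/profiles from the standalone example
  kubectl get actionsets.cr.kanister.io,profiles.cr.kanister.io --all-namespaces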
And we can see the blueprint annotated in the StatefulSet YAML (a sketch of adding this annotation follows the blueprint below).
Insert three records into the database table, like below.
Begin to back up the application on the K10 dashboard.
Remove the database test to simulate a disaster.
Restore the data using the backup point from before.
The MySQL application is restored.
Check the MySQL databases, but the database test is not restored.
Below is my blueprint YAML file.
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mysql-dep-config-blueprint
  namespace: kasten-io
actions:
  backup:
    outputArtifacts:
      mysqlBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using `--output-name` flag
        kopiaSnapshot: "{{ .Phases.dumpToStore.Output.kopiaOutput }}"
    phases:
    - func: KubeTask
      name: dumpToStore
      objects:
        mysqlsecret:
          kind: Secret
          #name: "{{ .DeploymentConfig.Name }}"
          name: mysql-password
          #namespace: "{{ .DeploymentConfig.Namespace }}"
          namespace: "{{ .StatefulSet.Namespace }}"
      args:
        #image: ghcr.io/kanisterio/mysql-sidecar:0.83.0
        image: registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0
        namespace: "{{ .StatefulSet.Namespace }}"
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          backup_file_path="dump.sql"
          root_password="{{ index .Phases.dumpToStore.Secrets.mysqlsecret.Data "password" | toString }}"
          dump_cmd="mysqldump --column-statistics=0 -u root --password=${root_password} -h mysql-bk --single-transaction --all-databases"
          ${dump_cmd} | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -
  restore:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
    - func: KubeTask
      name: restoreFromStore
      objects:
        mysqlsecret:
          kind: Secret
          name: mysql-password
          namespace: "{{ .StatefulSet.Namespace }}"
      args:
        image: registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0
        namespace: "{{ .StatefulSet.Namespace }}"
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          backup_file_path="dump.sql"
          kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
          root_password="{{ index .Phases.restoreFromStore.Secrets.mysqlsecret.Data "password" | toString }}"
          restore_cmd="mysql -u root --password=${root_password} -h mysql-bk"
          kando location pull --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}" - | ${restore_cmd}
  delete:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
    - func: KubeTask
      name: deleteFromStore
      args:
        image: registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0
        namespace: "{{ .StatefulSet.Namespace }}"
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          backup_file_path="dump.sql"
          kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
          kando location delete --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}"
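For completeness, the annotation mentioned in the first step above is usually added to the workload like this; a sketch, where the StatefulSet name mysql and the namespace deploy-mysql are assumptions:

  kubectl --namespace deploy-mysql annotate statefulset mysql \
    kanister.kasten.io/blueprint='mysql-dep-config-blueprint'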
@Hagag
In the logs from pod controllermanager-svc-594969bbb6-kcrmx, there is an error, shown in the screenshot below.
Does this error affect the backup and restore?
@meijianwei I have successfully reproduced the issue. It appears to occur when using replicas in your workload. I will need some time to investigate this behavior and provide you with further information. In the meantime, a possible workaround is to remove the annotation from your workload, such as the StatefulSet (to disable blueprint usage), and perform a normal backup.
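A sketch of that workaround, assuming the annotation was added as in the earlier sketch:

  # the trailing '-' removes the annotation, which disables the blueprint for this workload
  kubectl --namespace deploy-mysql annotate statefulset mysql kanister.kasten.io/blueprint-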
@meijianwei P.S. When attempting to connect to your MySQL host in the blueprint, you are connecting through the service cluster IP which acts as a load balancer between your workload endpoints. However, this does not ensure that you are connecting to the host containing the test database. Prior to utilizing the blueprint, it is necessary to manage the database synchronization between the endpoints.
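One common way to handle this for a StatefulSet is to dump from a stable, pod-specific DNS name through the governing headless service rather than through the load-balanced service IP; a sketch, assuming a headless service named mysql-bk and that the primary replica is pod mysql-bk-0:

  # target one specific replica instead of the service cluster IP
  mysql_host="mysql-bk-0.mysql-bk.${NAMESPACE}.svc.cluster.local"
  dump_cmd="mysqldump --column-statistics=0 -u root --password=${root_password} -h ${mysql_host} --single-transaction --all-databases"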
@Hagag thanks very much. By the way, how can I manage the database synchronization between the endpoints? Could you give me a guide?
My current situation is that the test database cannot be restored whether I use the blueprint or not.