
Hi, I followed the logical MySQL backup guide for OpenShift and the backup looks normal, but when I use the backup point to restore MySQL, the database `test` that I created before the backup is not restored.

My blueprint content is as below:

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mysql-dep-config-blueprint
actions:
  backup:
    outputArtifacts:
      mysqlBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using `--output-name` flag
        kopiaSnapshot: "{{ .Phases.dumpToStore.Output.kopiaOutput }}"
    phases:
      - func: KubeTask
        name: dumpToStore
        objects:
          mysqlsecret:
            kind: Secret
            #name: "{{ .DeploymentConfig.Name }}"
            name: mysql-password
            #namespace: "{{ .DeploymentConfig.Namespace }}"
            namespace: mjw-sfs
        args:
          #image: ghcr.io/kanisterio/mysql-sidecar:0.83.0
          image: registry.example.com:8443/kasten-images/mysql-sidecar:0.83.0
          namespace: mjw-sfs
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="dump.sql"
              root_password="{{ index .Phases.dumpToStore.Secrets.mysqlsecret.Data "password" | toString }}"
              dump_cmd="mysqldump --column-statistics=0 -u root --password=${root_password} -h mysql-bk --single-transaction --all-databases"
              ${dump_cmd} | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -
  restore:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
      - func: KubeTask
        name: restoreFromStore
        objects:
          mysqlsecret:
            kind: Secret
            name: mysql-password
            namespace: mjw-sfs
        args:
          image: registry.example.com:8443/kasten-images/mysql-sidecar:0.83.0
          namespace: mjw-sfs
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="dump.sql"
              kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
              root_password="{{ index .Phases.restoreFromStore.Secrets.mysqlsecret.Data "password" | toString }}"
              restore_cmd="mysql -u root --password=${root_password} -h mysql-bk"
              kando location pull --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}" - | ${restore_cmd}
  delete:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
      - func: KubeTask
        name: deleteFromStore
        args:
          image: registry.example.com:8443/kasten-images/mysql-sidecar:0.83.0
          namespace: mjw-sfs
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="dump.sql"
              kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
              kando location delete --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}"

The kanister-svc log has an error, as in the screenshot below:

 

What is the root cause of this error?

 

@jaiganeshjk 


@meijianwei Thank you for posting this question.

From your screenshot, I can see that the phase `restoreFromStore` completed. However, the progress tracking for the job is failing.

I am not sure about the exact reason for this. I see that you are on a very old and probably unsupported version of K10 (5.0.11).

My suggestion is to upgrade K10 to the latest (or n-1) version and see if you can run the blueprint-based backup again.

If you still hit the same issue, we can dig into it by looking deeper into the logs.


@jaiganeshjk Thanks for your reply.

I already upgraded Kasten K10 from 5.0.11 to 5.5.6 using the command “helm upgrade k10 /root/k10/k10-5.5.6.tgz --namespace=kasten-io -f /root/mjw/k10/test/k10_val.yaml”.

But two pods fail, as in the screenshot below.

Detailed error info for pod catalog-svc-5c7c6764f7-ntswr is as below:

Detailed error info for pod metering-svc-667fb78dc4-k72cd is as below:

Yes, I know the reason is that the volumes pvc-14f8ae51-7f71-4966-871a-f38599165cd7 and pvc-c8849df6-25ea-45c1-8964-668ae7b5893b were deleted, because I changed the backend from nas-130 to nas-131 and removed them manually. I didn't know that K10 wouldn't create them automatically after an upgrade or reboot. Now how can I recover them? Could you guide me? Thank you very much.

Strangely, I did the same operations for pods jobs-svc-848d69b956-fmg7g and logging-svc-68d94d9b45-xm7dk, and those two pods recreated their volumes automatically after upgrading K10. I don't know why catalog-svc-5c7c6764f7-ntswr and metering-svc-667fb78dc4-k72cd cannot.
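For reference, a quick way to compare what the two failing pods are waiting on (standard kubectl only; pod and namespace names as above):

kubectl --namespace kasten-io get pvc
kubectl --namespace kasten-io describe pod catalog-svc-5c7c6764f7-ntswr
kubectl --namespace kasten-io describe pod metering-svc-667fb78dc4-k72cd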

PVCs recreated automatically after upgrading K10:

 


Update.

When I removed the failed pods and they restarted, only one pod still fails.

Detailed error info:

Log from /var/log/containers/metering-svc-667fb78dc4-85vmt_kasten-io_upgrade-init-8691520d4b7657bf07a8a8c3125bef9b4e8fe7fda8afbf5ed102d824eb97c020.log:

Could you help me fix this issue? Thank you very much.


This is usually a problem with storage backends that expose client access to the snapshot directory.

K10 tries to run chown recursively and fails since the `.snapshot` directory is read-only.

May I know what NAS backend you are using in this case?

Is there a way you can disable client access to the snapshot directory for this volume?

 


@jaiganeshjk thanks. The backend NAS is a Huawei OceanStor Dorado 5600 V6 6.1.5.

I'll check it. But I did not meet this problem in K10 5.0.11; I don't know why it appears in K10 5.5.6.


Recent versions probably have improvements in how we handle the phone-home reports.


@jaiganeshjk I rolled back to K10 5.0.11 and fixed some issues that I had met before.

Now, the backup looks fine.

But the restore fails; during the restore I can see the restore-data pods, as in the screenshot below.

The error info is as below:

Could you help me with this problem? Thanks a lot.


@jaiganeshjk Update again, and please ignore the problem above. I installed K10 version 5.5.8 successfully.

But the MySQL data is still not restored. Is there something wrong with any of my steps? Below are the detailed steps.

  1. Deploy the statefulset; all pods are in the Running state.
  2. Create a new database named test.
  3. Back up the namespace k10-mysql in Kasten K10.
  4. Restore to a new namespace named restore-2.
  5. The application restore-2 is restored successfully.
  6. But the database named test created before is not restored.

 


Why is the snapshot here 0B?

 


The logical blueprint that you are using is an example created for standalone MySQL, not HA.

I don't know how the data is synced between the replicas. You will have to keep in mind how the HA setup works before backing it up.

 

The Snapshot field shows the capacity of the local snapshots created by the CSI driver or the provider.

However, your backup was blueprint-based and was pushed to your location target, so it is counted under object storage, not Snapshots.


I referred to the example from the link below.

https://github.com/kanisterio/kanister/tree/master/examples/mysql

But I don't know why my MySQL data is not restored.

By the way, how can I change injectKanisterSidecar.enabled to false from the command line? I want to verify again using snapshots.
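For reference, a minimal sketch of flipping that flag with Helm, assuming the release is named k10 and you are installing from a local chart package as earlier in this thread (the package path is a placeholder):

helm upgrade k10 /root/k10/k10-5.5.8.tgz \
  --namespace=kasten-io \
  --reuse-values \
  --set injectKanisterSidecar.enabled=false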

 


 

@jaiganeshjk I verified that the backup and restore did not use injectKanisterSidecar and just used snapshots, but the database data is still not restored.

Steps as below:

  1. Deploy the MySQL application and log in to MySQL.
  2. Create the database test and insert three records.

 

  3. Create a policy on the K10 dashboard and back up the namespace deploy-mysql; the result is fine.

During the backup I can see that the snapshot is created.

  4. Then restore to a new namespace named restore-11 using the restore point of the backup.
Similarly, I can also see the restored snapshot when restoring.
  5. Check the pod status in namespace restore-11 and log in to MySQL to check the database test. The result is that the database test has not been restored (see the check sketched below).
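A quick way to run that check, sketched with the Deployment name and root password taken from the YAML later in this thread:

kubectl --namespace restore-11 get pods
kubectl --namespace restore-11 exec deploy/mysql -- mysql -u root --password=123456 -e "SHOW DATABASES LIKE 'test';"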

 

Could you help me with the problem? Thank you very much.

My Kubernetes is OpenShift 4.10.0-0.okd-2022-07-09-073606.

The Kubernetes version is v1.23.5-rc.0.2076+8cfebb1ce4a59f-dirty.



@meijianwei The regular snapshot should function correctly. Could you please provide the K10 values file and the YAML file of your MySQL StatefulSet?
Also, how did you disable the injection of the Kanister sidecar?


@Hagag thanks for your response.

I have tested both the StatefulSet and the Deployment, but got the same result.

I disabled the injection of the Kanister sidecar with the parameter “--set injectKanisterSidecar.enabled=true”.

The YAML of the StatefulSet and the Deployment are as below:

apiVersion: v1
kind: Service
metadata:
  name: mysql-bk
  namespace: mysql-test
  labels:
    app: mysql
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-bk-sfs
  namespace: mysql-test
spec:
  selector:
    matchLabels:
      app: mysql # must match .spec.template.metadata.labels
  serviceName: mysql-bk
  replicas: 3 # default is 1
  minReadySeconds: 10 # default is 0
  template:
    metadata:
      labels:
        app: mysql # must match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: registry.example.com:8443/mysql/mysql:8.0.26
        imagePullPolicy: IfNotPresent
        env:
          - name: MYSQL_ROOT_PASSWORD
            value: "123456"
        ports:
          - containerPort: 3306
            name: mysql-svc
        volumeMounts:
        - name: mysql-bk-pvc
          mountPath: /root/mjw
  volumeClaimTemplates:
  - metadata:
      name: mysql-bk-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: mysc
      resources:
        requests:
          storage: 100Gi 

============below is the yaml of deployment=================

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: deploy-mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: registry.example.com:8443/mysql/mysql:5.7
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /root/mjw
              name: mysql-pvc-test
      volumes:
        - name: mysql-pvc-test
          persistentVolumeClaim:
            claimName: mysql-pvc-test

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-test
  namespace: deploy-mysql
spec:
  accessModes:
    - ReadWriteOnce
  # replace with your StorageClass
  storageClassName: mysc
  resources:
    requests:
      storage: 50Gi
---

apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  namespace: deploy-mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      protocol: TCP
      name: http
      nodePort: 32356
  selector:
    app: mysql

 

============below is the k10 value file content==============

global:
  airgapped:
    repository: registry.example.com:8443/kasten-images
  persistence:
    storageClass: mysc
injectKanisterSidecar:
  enabled: true
metering:
  mode: airgap


@meijianwei The "injectKanisterSidecar.enabled" parameter is utilized when performing a Generic Storage Backup and Restore and is not associated with the logical backup of MySQL.

 

It appears that you have confused two configurations. Earlier, you had mentioned that you were following a GitHub link to back up your database (https://github.com/kanisterio/kanister/tree/master/examples/mysql).
The link employs Kanister, which is a tool for managing data protection workflows.

If you want to enable logical MySQL backup, I suggest using the link below. However, before proceeding, it's important to remove any previously created Artifacts and CRDs.

https://docs.kasten.io/latest/kanister/mysql/install.html?highlight=logical#logical-mysql-backup

To delete the Artifacts:
https://github.com/kanisterio/kanister/tree/master/examples/mysql#delete-the-artifacts
To delete the CRs:
https://github.com/kanisterio/kanister/tree/master/examples/mysql#delete-crs
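A rough sketch of that cleanup, assuming the Blueprint from the Kanister MySQL example is named mysql-blueprint and lives in kasten-io (the ActionSet names are placeholders; adjust everything to your environment):

kubectl --namespace kasten-io delete actionsets.cr.kanister.io <backup-actionset-name> <restore-actionset-name>
kubectl --namespace kasten-io delete blueprints.cr.kanister.io mysql-blueprint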


@Hagag Thanks for your guidance. But the database data is still not restored. Please help me to fix the problem.

First, the Artifacts and CRDs were cleaned up in my environment.

Second, the steps below follow the logical MySQL backup guide.

  1. Edit the blueprint file, which I got from this link:
https://raw.githubusercontent.com/kanisterio/kanister/0.90.0/examples/mysql-deploymentconfig/blueprint-v2/mysql-dep-config-blueprint.yaml
  2. Create the blueprint in the namespace kasten-io (a sketch of this step is below).
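For reference, assuming the edited file was saved locally as mysql-dep-config-blueprint.yaml:

kubectl --namespace kasten-io apply -f mysql-dep-config-blueprint.yaml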

 

  3. Annotate the StatefulSet with the annotation below to instruct K10 to use this Blueprint.

kubectl --namespace mysql-test annotate statefulset/mysql-bk-sfs kanister.kasten.io/blueprint=mysql-dep-config-blueprint

We can then see the blueprint annotation in the StatefulSet YAML (a quick way to check is sketched below).
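For example, one way to confirm the annotation was applied (standard kubectl, nothing K10-specific assumed):

kubectl --namespace mysql-test describe statefulset mysql-bk-sfs | grep kanister.kasten.io/blueprint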

  4. Insert three records in the database table, as below.

 

  5. Begin the backup of the application on the K10 dashboard.

 

  6. Remove the database test to simulate a disaster.

 

  7. Restore the data using the backup point from before.

 

  8. The MySQL application is restored.
  9. Check the MySQL database; the database test is not restored (see the check sketched below).
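One way to see whether the blueprint's restore phase actually ran (this is an assumption about the setup: K10 executes blueprint actions as Kanister ActionSets, typically in the kasten-io namespace; the ActionSet name below is a placeholder):

kubectl --namespace kasten-io get actionsets.cr.kanister.io
kubectl --namespace kasten-io describe actionsets.cr.kanister.io <restore-actionset-name>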

 

Below is my blueprint YAML file:

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mysql-dep-config-blueprint
  namespace: kasten-io
actions:
  backup:
    outputArtifacts:
      mysqlBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using `--output-name` flag
        kopiaSnapshot: "{{ .Phases.dumpToStore.Output.kopiaOutput }}"
    phases:
      - func: KubeTask
        name: dumpToStore
        objects:
          mysqlsecret:
            kind: Secret
            #name: "{{ .DeploymentConfig.Name }}"
            name: mysql-password
            #namespace: "{{ .DeploymentConfig.Namespace }}"
            namespace: "{{ .StatefulSet.Namespace }}"
        args:
          #image: ghcr.io/kanisterio/mysql-sidecar:0.83.0
          image: registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0
          namespace: "{{ .StatefulSet.Namespace }}"
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="dump.sql"
              root_password="{{ index .Phases.dumpToStore.Secrets.mysqlsecret.Data "password" | toString }}"
              dump_cmd="mysqldump --column-statistics=0 -u root --password=${root_password} -h mysql-bk --single-transaction --all-databases"
              ${dump_cmd} | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -
  restore:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
      - func: KubeTask
        name: restoreFromStore
        objects:
          mysqlsecret:
            kind: Secret
            name: mysql-password
            namespace: "{{ .StatefulSet.Namespace }}"
        args:
          image: registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0
          namespace: "{{ .StatefulSet.Namespace }}"
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="dump.sql"
              kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
              root_password="{{ index .Phases.restoreFromStore.Secrets.mysqlsecret.Data "password" | toString }}"
              restore_cmd="mysql -u root --password=${root_password} -h mysql-bk"
              kando location pull --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}" - | ${restore_cmd}
  delete:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
      - func: KubeTask
        name: deleteFromStore
        args:
          image: registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0
          namespace: "{{ .StatefulSet.Namespace }}"
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="dump.sql"
              kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
              kando location delete --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}"


@Hagag 

In the logs from pod controllermanager-svc-594969bbb6-kcrmx there is an error, shown in the screenshot below.

Does this error affect the backup and restore?

 


@meijianwei I have successfully reproduced the issue. It appears to occur when using replicas in your workload. I will need some time to investigate this behavior and provide you with further information. In the meantime, a possible workaround is to remove the annotation from your workload (for example the StatefulSet) to disable blueprint usage, and perform a normal backup:

kubectl --namespace mysql-test annotate statefulset/mysql-bk-sfs kanister.kasten.io/blueprint=mysql-dep-config-blueprint-



P.S. The blueprint will work with one replica.
 


@meijianwei  P.S. When attempting to connect to your MySQL host in the blueprint, you are connecting through the service cluster IP which acts as a load balancer between your workload endpoints. However, this does not ensure that you are connecting to the host containing the test database. Prior to utilizing the blueprint, it is necessary to manage the database synchronization between the endpoints.
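To see this in practice, here is a rough check (pod names, container name, and root password taken from the StatefulSet YAML earlier in this thread) that queries each replica directly instead of going through the Service:

for i in 0 1 2; do
  kubectl --namespace mysql-test exec mysql-bk-sfs-$i -c mysql -- mysql -u root --password=123456 -e "SHOW DATABASES LIKE 'test';"
done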


@Hagag thanks very much. By the way, how can I manage the database synchronization between the endpoints? Could you give me a guide? 

My current situation is that the test database cannot be restored whether I use the blueprint or not.

