Question

Logical Backups to NFS File Storage Location failed


Hi guys,

When I configure a location profile on NFS, the blueprint run fails.

The location profile is configured as shown in the snapshot below.

https://uploads-eu-west-1.insided.com/veeam-en/attachment/306d57ee-f5fa-4c0f-8352-73a6d046313d.png
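For reference, the profile points at a PVC in the kasten-io namespace that mounts the NFS share. A rough sketch of the equivalent Profile CR (field layout as I understand the K10 Profile CRD; only the claim name is from my setup):

apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: nfs-location-profile
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: FileStore
    fileStore:
      # PVC in kasten-io that mounts the NFS export
      claimName: k10-pvc-location-0612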

 

The error information from the logs is shown below; it looks like the location type is unsupported, but according to the guide https://docs.kasten.io/latest/kanister/testing.html#configuring-a-profile it should be supported.
 

https://uploads-eu-west-1.insided.com/veeam-en/attachment/49cd44fc-7f4d-4399-a725-4a554dd9457a.png

 

My blueprint contents are below; registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0 is my private repository.

 

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mysql-blueprint
  namespace: kasten-io
actions:
  backup:
    outputArtifacts:
      mysqlBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using `--output-name` flag
        kopiaSnapshot: "{{ .Phases.dumpToStore.Output.kopiaOutput }}"
    phases:
      - func: KubeTask
        name: dumpToStore
        objects:
          mysqlsecret:
            kind: Secret
            #name: "{{ .DeploymentConfig.Name }}"
            name: mysql-password
            #namespace: "{{ .Deployment.Namespace }}"
            #namespace: "{{ .StatefulSet.Namespace }}"
            namespace: mysql-0614
        args:
          #image: ghcr.io/kanisterio/mysql-sidecar:0.83.0
          image: registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0
          #namespace: "{{ .StatefulSet.Namespace }}"
          namespace: mysql-0614
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="dump.sql"
              root_password="{{ index .Phases.dumpToStore.Secrets.mysqlsecret.Data "password" | toString }}"
              dump_cmd="mysqldump --column-statistics=0 -u root --password=${root_password} -h {{ index .Object.metadata.labels "app.kubernetes.io/instance" }} --single-transaction --all-databases"
              ${dump_cmd} | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -
  restore:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
      - func: KubeTask
        name: restoreFromStore
        objects:
          mysqlsecret:
            kind: Secret
            name: mysql-password
            namespace: mysql-0614
        args:
          image: registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0
          namespace: mysql-0614
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="dump.sql"
              kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
              root_password="{{ index .Phases.restoreFromStore.Secrets.mysqlsecret.Data "password" | toString }}"
              restore_cmd="mysql -u root --password=${root_password} -h {{ index .Object.metadata.labels "app.kubernetes.io/instance" }}"
              kando location pull --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}" - | ${restore_cmd}
  delete:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `mysqlBackup.KopiaSnapshot`
      - mysqlBackup
    phases:
      - func: KubeTask
        name: deleteFromStore
        args:
          image: registry.example.com:8443/kasten-images/mysql-sidecar:0.90.0
          namespace: "{{ .Deployment.Namespace }}"
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="dump.sql"
              kopia_snap='{{ .ArtifactsIn.mysqlBackup.KopiaSnapshot }}'
              kando location delete --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}"
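
For completeness, the blueprint is applied and then attached to the MySQL workload with the standard kanister.kasten.io/blueprint annotation, roughly like this (the StatefulSet name mysql is an assumption from my setup):

kubectl apply -f mysql-blueprint.yaml

# Point K10 at the blueprint for this workload
kubectl annotate statefulset mysql -n mysql-0614 \
    kanister.kasten.io/blueprint=mysql-blueprint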

9 comments

FRubens
  • Experienced User
  • 96 comments
  • June 14, 2023

Hello @meijianwei

Thank you for using our K10 community!

I am checking on kando tool support for NFS profiles and will get back to you.

 

FRubens


Hagag
  • Experienced User
  • 154 comments
  • June 14, 2023


Hello @meijianwei  

The logical backup feature now supports the NFS location type (FileStore), and backups should be successfully sent to NFS location profiles. I have personally tested this functionality and it is working as intended.

It appears that there may be an issue with the Persistent Volume Claim (PVC) you created for the NFS. Please provide the list of PVCs you have in kasten-io and the YAML for your NFS PVC/PV, as well as the profile YAML and the K10 version being used.
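
You can gather that with something along these lines (adjust the resource names to your environment):

kubectl get pvc -n kasten-io
kubectl get pvc <nfs-pvc-name> -n kasten-io -o yaml
kubectl get pv <nfs-pv-name> -o yaml
kubectl get profiles.config.kio.kasten.io -n kasten-io -o yaml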


  • Author
  • Comes here often
  • 22 comments
  • June 14, 2023

@Hagag thanks,

The PVC list is below:

The PVC named k10-pvc-location-0612 is the one used for NFS.

Its YAML is shown in the snapshot below:

 

K10 version is 5.5.8.

 


  • Author
  • Comes here often
  • 22 comments
  • June 14, 2023

The profile YAML is as below:

 


Hagag
  • Experienced User
  • 154 comments
  • June 14, 2023

@meijianwei  

 

Upon reviewing the YAML you provided for the NFS PVC, you will notice the presence of the volumeName parameter. This indicates that the Persistent Volume (PV) bound to the PVC was dynamically provisioned.

Here is the issue:

 

To address the issue, you will need to create a new Persistent Volume (PV) backed by NFS, following the example provided below. The PVC should then bind to the newly created PV.

PV:

apiVersion: v1
kind: PersistentVolume
metadata:
   name: test-pv-nfs
spec:
   capacity:
      storage: 2Gi
   volumeMode: Filesystem
   accessModes:
      - ReadWriteMany
   persistentVolumeReclaimPolicy: Retain
   storageClassName: nfs
   mountOptions:
      - hard
      - nfsvers=4.1
   nfs:
      path: /srv/nfs/kubedata
      server: 10.10.10.15


PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: nfs-pvc
   namespace: kasten-io
spec:
   storageClassName: nfs
   accessModes:
      - ReadWriteMany
   resources:
      requests:
         storage: 2Gi


Here is the bound example from my test environment 
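
The bound state can be confirmed with the commands below (resource names match the example above); both should report STATUS Bound:

kubectl get pv test-pv-nfs
kubectl get pvc nfs-pvc -n kasten-io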

 

 


  • Author
  • Comes here often
  • 22 comments
  • June 14, 2023

@Hagag Thanks. We may not have found the root cause of the problem yet. I followed your guide to recreate the PV and PVC, reconfigured the location profile on the K10 dashboard, and then ran a backup with the blueprint; the same error occurred.

My PV:


apiVersion: v1
kind: PersistentVolume
metadata:
   name: test-pv-nfs
spec:
   capacity:
      storage: 100Gi
   volumeMode: Filesystem
   accessModes:
      - ReadWriteMany
   persistentVolumeReclaimPolicy: Retain
   storageClassName: mysc
   mountOptions:
      - hard
      - nfsvers=4.1
   nfs:
      path: /mysql/nfs/location/data
      server: 20.20.20.130

PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: nfs-location-pvc
   namespace: kasten-io
spec:
   storageClassName: mysc
   accessModes:
      - ReadWriteMany
   resources:
      requests:
         storage: 100Gi
   volumeName: test-pv-nfs

The PVC and PV are bound.

I created a new location profile on the K10 dashboard.

 

I created a policy to back up the application.

 

I ran the policy once and got the log below:

 

Logs from kanister-svc:

 


Hagag
  • Experienced User
  • 154 comments
  • June 14, 2023

Hello @meijianwei 

 

In addition to the incorrect configuration of the NFS volume, the policy you provided was instrumental in pinpointing the issue. By reproducing the problem, I managed to identify the root cause: the use of "Kanister Execution Hooks" that run the same blueprint's actions before the snapshot is taken.

 


 



It appears that in this scenario the profile passed to the kando command is NULL, as visible in the screenshot below, which explains the "Unsupported Location type" error.

 


To resolve this, you will need to disable the "Kanister Execution Hooks" in the policy, as sketched below.
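
In the Policy CR, the hooks sit under the backup action's parameters; a rough sketch of the block to remove (field names per my reading of the K10 Policy spec, values illustrative):

spec:
  actions:
    - action: backup
      backupParameters:
        # Kanister Execution Hooks - deleting this block disables them
        hooks:
          preHook:
            blueprint: mysql-blueprint
            actionName: backup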

 



 


  • Author
  • Comes here often
  • 22 comments
  • June 15, 2023

@Hagag Thanks for your patience and careful analysis. The "Unsupported Location type" error is solved, but I hit another issue when performing the backup.

Error message from the K10 dashboard:

 


Hagag
  • Experienced User
  • 154 comments
  • June 22, 2023

@meijianwei 

I'm pleased to hear that the initial problem has been resolved. However, the logs you provided for the second issue are insufficient. It would be helpful if you could run kubectl describe pod on the failing pod before it is terminated. To further investigate and troubleshoot the issue, it would be advisable to open a support case and share the debug logs.

Additionally, you can examine the logs of the kanister-svc and executor-svc pods, for example:
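
Something along these lines (adjust the pod and namespace names to your environment):

# Describe the failing application pod before it gets cleaned up
kubectl describe pod <failing-pod> -n mysql-0614

# K10 services that usually carry the Kanister/executor errors
kubectl logs -n kasten-io deployment/kanister-svc --tail=200
kubectl logs -n kasten-io deployment/executor-svc --tail=200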

