
Hello!

I am testing Kasten 6.5.12 in my EKS v1.29.1-eks-b9c9ed7 environment and I need some help:

My app has 2 PVCs: ebs-claim (storage class ebs-sc, provisioner ebs.csi.aws.com) and efs-claim (storage class efs-sc, provisioner efs.csi.aws.com)

When I exclude the EFS claim (efs-claim) from the policy, it runs and exports the backup to my S3 bucket.

So the EBS snapshot is working fine, but when it tries to back up EFS I get the following error:

 

Error details: 

- cause:
    cause:
      cause:
        cause:
          cause:
            fields:
              - name: storageClassName
                value: efs-sc
            file: kasten.io/k10/kio/kube/volume.go:659
            function: kasten.io/k10/kio/kube.CreatePVandPVCClone
            linenumber: 659
            message: Unsupported storageclass for pv/pvc clone
          file: kasten.io/k10/kio/exec/phases/phase/data_manager.go:176
          function: kasten.io/k10/kio/exec/phases/phase.(*SharedVolumeSnapshotManager).SnapshotCreate
          linenumber: 176
          message: Failed to clone PV/PVC
        fields:
          - name: pvcName
            value: efs-claim
          - name: namespace
            value: appmasker
        file: kasten.io/k10/kio/exec/phases/backup/snapshot_data_phase.go:862
        function: kasten.io/k10/kio/exec/phases/backup.basicVolumeSnapshot.basicVolumeSnapshot.func1.func2
        linenumber: 862
        message: Error snapshotting volume
      fields:
        - name: appName
          value: mask-frontend
        - name: appType
          value: statefulset
        - name: namespace
          value: appmasker
      file: kasten.io/k10/kio/exec/phases/backup/snapshot_data_phase.go:873
      function: kasten.io/k10/kio/exec/phases/backup.basicVolumeSnapshot
      linenumber: 873
      message: Failed to snapshot volumes
    file: kasten.io/k10/kio/exec/phases/backup/snapshot_data_phase.go:392
    function: kasten.io/k10/kio/exec/phases/backup.processVolumeArtifacts
    linenumber: 392
    message: Failed snapshots for workload
  message: Job failed to be executed

 

I have a valid Infrastructure Profile:

EFS ACCESS: Enabled (Kasten even created a backup vault in AWS Backup named k10vault)

EBS ACCESS: Enabled

EBS DIRECT: Enabled

STATUS: Valid

 

I have a valid Location Profile

and had no issues running curl -s https://docs.kasten.io/tools/k10_primer.sh | bash:

 

Running K10Primer Job in cluster with command
     ./k10tools primer
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Pod Ready!
================================================================
Kubernetes Version Check:
  Valid kubernetes version (v1.29.1-eks-b9c9ed7)  -  OK

RBAC Check:
  Kubernetes RBAC is enabled  -  OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled  -  OK

CSI Capabilities Check:
  Using CSI GroupVersion snapshot.storage.k8s.io/v1  -  OK

Validating Provisioners:
ebs.csi.aws.com:
  Is a CSI Provisioner  -  OK
  Storage Classes:
    ebs-sc
      Valid Storage Class  -  OK
  Volume Snapshot Classes:
    ebs-snapshot-class
      Has k10.kasten.io/is-snapshot-class annotation set to true  -  OK
      Has deletionPolicy 'Delete'  -  OK
    k10-clone-ebs-snapshot-class

efs.csi.aws.com:
  Storage Classes:
    efs-sc
      Valid Storage Class  -  OK

kubernetes.io/aws-ebs:
  Storage Classes:
    gp2
      Valid Storage Class  -  OK

Validate Generic Volume Snapshot:
  Pod created successfully  -  OK
  GVS Backup command executed successfully  -  OK
  Pod deleted successfully  -  OK
=============================================================

 

In system information I have:

Storage class gp2: Valid

Storage class ebs-sc: Valid 

Storage class efs-sc: Failed 

 

PROVISIONER efs.csi.aws.com

RECLAIM POLICY Delete

VOLUME BINDING MODE Immediate

ALLOW VOLUME EXPANSION false

K10 SNAPSHOT TYPE GVS

Status failed: 

failed to create pod ({"message":"failed while waiting for PVC to be ready","function":"kasten.io/k10/kio/tools/k10primer.(*gvsPodOperator).createAndWait","linenumber":157,"file":"kasten.io/k10/kio/tools/k10primer/validate_gvs.go:157","fields":[{"name":"pvc","value":"kanister-tools-zl5z2"}],"cause":{"message":"found issues creating PVC","function":"kasten.io/k10/kio/tools/k10primer.(*gvsPodOperator).waitForPVCReadyOrCheckEventIssues","linenumber":289,"file":"kasten.io/k10/kio/tools/k10primer/validate_gvs.go:289","cause":{"message":"failed to provision volume with StorageClass \"efs-sc\": rpc error: code = InvalidArgument desc = Missing provisioningMode parameter"}}})

@joao.neri are your EFS volumes statically or dynamically provisioned?

If they are dynamically provisioned, please review and follow the link below:
https://docs.kasten.io/latest/install/shareable-volume.html#shareable-volume-backup-and-restore
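
As a rough sanity check (not an official K10 tool; the `kubectl` path below is an assumption based on the PV posted in this thread), you can inspect the PV's CSI volumeHandle: the EFS CSI driver encodes an access point as `fs-<id>::fsap-<id>`, and dynamically provisioned volumes are always created through an access point:

```shell
# Rough check: does the PV's volumeHandle reference an EFS access point?
# In a live cluster the handle would come from:
#   kubectl get pv efs-pv -o jsonpath='{.spec.csi.volumeHandle}'
# Here it is hard-coded from the describe output in this thread.
handle="fs-06fbe2435974747fa::fsap-076f8a67471ebe0b0"
case "$handle" in
  *::fsap-*) echo "mounted through an EFS access point" ;;
  *)         echo "plain file system mount" ;;
esac
```

Note that a statically created PV can also reference an access point (as the PV in this thread does), so this only shows how the volume is mounted; the provisioning mode is ultimately determined by how the PV was created.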


Also, please share the YAML of the EFS storage class and the volume (PV) you are trying to back up.

Thanks
Ahmed Hagag


One more thing: what type are your EKS nodes? Is it Fargate?

 


Hi @Hagag it is not Fargate.

It is dynamically provisioned. Do I need to deploy the GSB sidecar approach?

The GSB feature is disabled by default, right?

Thank you very much

 

YAML and details below:

 

➜  kasten: kubectl describe pv efs-pv
Name:            efs-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    efs-sc
Status:          Bound
Claim:           appmasker/efs-claim
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        40Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            efs.csi.aws.com
    FSType:
    VolumeHandle:      fs-06fbe2435974747fa::fsap-076f8a67471ebe0b0
    ReadOnly:          false
    VolumeAttributes:  <none>
Events:                <none>

 

EFS SC yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    k10.kasten.io/revalidated-at: '{"Status":"Failed","LastUpdateTime":"2024-04-24T19:27:17Z","Errors":["failed
      to create pod ({\"message\":\"failed while waiting for PVC to be ready\",\"function\":\"kasten.io/k10/kio/tools/k10primer.(*gvsPodOperator).createAndWait\",\"linenumber\":157,\"file\":\"kasten.io/k10/kio/tools/k10primer/validate_gvs.go:157\",\"fields\":[{\"name\":\"pvc\",\"value\":\"kanister-tools-zl5z2\"}],\"cause\":{\"message\":\"found
      issues creating PVC\",\"function\":\"kasten.io/k10/kio/tools/k10primer.(*gvsPodOperator).waitForPVCReadyOrCheckEventIssues\",\"linenumber\":289,\"file\":\"kasten.io/k10/kio/tools/k10primer/validate_gvs.go:289\",\"cause\":{\"message\":\"failed
      to provision volume with StorageClass \\\"efs-sc\\\": rpc error: code = InvalidArgument
      desc = Missing provisioningMode parameter\"}}})"]}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"efs-sc"},"parameters":{"directoryPerms":"777","gidRangeStart":"1000"},"provisioner":"efs.csi.aws.com"}
  creationTimestamp: "2024-03-08T21:36:12Z"
  name: efs-sc
  resourceVersion: "14271902"
  uid: e9bd0f43-9540-42da-8a34-6e3f776b5d88
parameters:
  directoryPerms: "777"
  gidRangeStart: "1000"
provisioner: efs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

 

EFS PV yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"efs-pv"},"spec":{"accessModes":["ReadWriteMany"],"capacity":{"storage":"40Gi"},"csi":{"driver":"efs.csi.aws.com","volumeHandle":"fs-06fbe4235974477fa::fsap-076f8a67741ebe0b0"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"efs-sc","volumeMode":"Filesystem"}}
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2024-03-08T21:36:12Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: efs-pv
  resourceVersion: "8730411"
  uid: 6bbfd1e8-3032-44b3-b1a0-23b8a6a3e3ee
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 40Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: efs-claim
    namespace: appmasker
    resourceVersion: "8730407"
    uid: a938433c-8841-46d7-b343-ec956390d41a
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-06fbe4235974477fa::fsap-076f8a67741ebe0b0
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  volumeMode: Filesystem
status:
  lastPhaseTransitionTime: "2024-04-06T17:52:16Z"
  phase: Bound


Hi @joao.neri 

There are two crucial points to highlight here: first, deploying the GSB sidecar approach is essential for dynamically provisioned EFS volumes.

https://docs.kasten.io/latest/install/shareable-volume.html#shareable-volume-backup-and-restore

Second, you need to configure the storage class with an additional parameter called "fileSystemId" under the parameters section. K10 uses it only to determine whether the storage class is dynamically provisioned, which in turn influences which backup workflow K10 follows.
 

To retrieve your Amazon EFS file system ID, you can use the Amazon EFS console or the following AWS CLI command:

aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text

More details on how to create the SC can be found here:

https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/dynamic_provisioning/README.md
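
For reference, a sketch of what a dynamic-provisioning storage class along the lines of that README looks like with both parameters in place (fs-xxxx is a placeholder for your own file system ID; directoryPerms and gidRangeStart are carried over from the efs-sc manifest shown above):

```yaml
# Sketch only: merge into your own efs-sc manifest.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap   # access-point based dynamic provisioning
  fileSystemId: fs-xxxx      # placeholder: your EFS file system ID
  directoryPerms: "777"      # carried over from the efs-sc shown above
  gidRangeStart: "1000"
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Note that storage class parameters cannot be edited in place, so applying this typically means deleting and recreating efs-sc (existing PVs are unaffected).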


Thanks
Ahmed Hagag


Thank you @Hagag 

But unfortunately I need an activation token to enable GSB:

https://docs.kasten.io/latest/install/generic.html#generic-kanister


@joao.neri 

We can generate the token for you.
Would you mind opening a case with us through `my.veeam.com` and selecting `Kasten by veeam K10 Trial` as the product when opening the case?

Thanks
Ahmed Hagag


These parameters in the SC fixed the issue:

parameters:
  fileSystemId: fs-xxxx
  provisioningMode: efs-ap

Thanks!!!

