Question

Failed to create snapshot CSI driver

  • 23 February 2024
  • 8 comments
  • 123 views


Hi,

when I run the K10 primer script, I get this error with the Portworx CSI driver:

curl -s https://docs.kasten.io/tools/k10_primer.sh | bash /dev/stdin csi -s px-csi-replicated
Using default user ID (1000)
Namespace option not provided, using default namespace
Checking for tools
 --> Found kubectl
 --> Found helm
 --> Found jq
 --> Found cat
 --> Found base64
 --> Found tr
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm version (>= v3.10.0)
 --> Helm binary version meet the requirements
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10tools:6.5.5) to run test
Checking access to the Kubernetes context default
 --> Able to access the default Kubernetes namespace
K10 Kanister tools image
 --> Using Kanister tools image (gcr.io/kasten-images/k10tools:6.5.5) to run test

Running K10Primer Job in cluster with command
     ./k10tools primer storage check csi
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Pod k10primer-9f4tc is in Pending phase
[...message repeated 13 times while the pod was scheduling...]
Pod Ready!
================================================================
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
         Found multiple snapshot API group versions, using preferred.
Creating application
  -> Created pod (kubestr-csi-original-podqgj4k) and pvc (kubestr-csi-original-pvc4h2r7)
Taking a snapshot
Cleaning up resources
CSI Snapshot Walkthrough:
  Using annotated VolumeSnapshotClass (px-csi-snapclass)
  Failed to create Snapshot: CSI Driver failed to create snapshot for PVC (kubestr-csi-original-pvc4h2r7) in Namespace (default): Failed to create snapshot: failed to get input parameters to create snapshot for content snapcontent-573faf63-9a55-45c9-abb3-307c7f057a89: "cannot get credentials for snapshot content \"snapcontent-573faf63-9a55-45c9-abb3-307c7f057a89\""  -  Error
Error: {"message":"Failed to create Snapshot: CSI Driver failed to create snapshot for PVC (kubestr-csi-original-pvc4h2r7) in Namespace (default): Failed to create snapshot: failed to get input parameters to create snapshot for content snapcontent-573faf63-9a55-45c9-abb3-307c7f057a89: \"cannot get credentials for snapshot content \\\"snapcontent-573faf63-9a55-45c9-abb3-307c7f057a89\\\"\"","function":"kasten.io/k10/kio/tools/k10primer.(*TestRetVal).Errors","linenumber":168,"file":"kasten.io/k10/kio/tools/k10primer/k10primer.go:168"}
================================================================
serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted

 

Do you have any idea to help me?

Thx :)


8 comments


@jaiganeshjk 


@Vecteur IT Thanks for creating this topic.

From the error message, it seems that the csi-snapshotter is failing to create the snapshot in the storage backend because the credentials/secret reference is unavailable.

Do you have px-security enabled? Does your VolumeSnapshotClass have fields like csi.storage.k8s.io/snapshotter-secret-name and csi.storage.k8s.io/snapshotter-secret-namespace under parameters?

https://docs.portworx.com/portworx-enterprise/operations/operate-kubernetes/storage-operations/csi/dataprotection.html#take-local-snapshots-of-csi-enabled-volumes
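A quick way to verify this is to print the class parameters and check that the referenced secret actually exists (the class, secret name, and namespace below are the ones from this thread, adjust to your setup):

```shell
# Show the parameters configured on the VolumeSnapshotClass
kubectl get volumesnapshotclass px-csi-snapclass -o jsonpath='{.parameters}'

# Confirm the snapshotter secret referenced by those parameters exists
kubectl -n kube-system get secret px-user-token
```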


@jaiganeshjk Thanks for your response.

Here is my VolumeSnapshotClass:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  annotations:
    k10.kasten.io/is-snapshot-class: 'true'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"snapshot.storage.k8s.io/v1","deletionPolicy":"Delete","driver":"pxd.portworx.com","kind":"VolumeSnapshotClass","metadata":{"annotations":{"k10.kasten.io/is-snapshot-class":"true","snapshot.storage.kubernetes.io/is-default-class":"true"},"name":"px-csi-snapclass"},"parameters":{"csi.openstorage.org/snapshot-type":"local","csi.storage.k8s.io/snapshotter-secret-name":"px-user-token","csi.storage.k8s.io/snapshotter-secret-namespace":"kube-system"}}
    snapshot.storage.kubernetes.io/is-default-class: 'true'
  creationTimestamp: '2024-02-23T15:20:09Z'
  generation: 1
  managedFields:
    - apiVersion: snapshot.storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:deletionPolicy: {}
        f:driver: {}
        f:metadata:
          f:annotations:
            .: {}
            f:k10.kasten.io/is-snapshot-class: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
            f:snapshot.storage.kubernetes.io/is-default-class: {}
        f:parameters:
          .: {}
          f:csi.openstorage.org/snapshot-type: {}
          f:csi.storage.k8s.io/snapshotter-secret-name: {}
          f:csi.storage.k8s.io/snapshotter-secret-namespace: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: '2024-02-23T15:20:09Z'
  name: px-csi-snapclass
  resourceVersion: '480574012'
  uid: 41a6a5dc-ac58-4017-a404-bb813861f38c
  selfLink: /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses/px-csi-snapclass
deletionPolicy: Delete
driver: pxd.portworx.com
parameters:
  csi.openstorage.org/snapshot-type: local
  csi.storage.k8s.io/snapshotter-secret-name: px-user-token
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system

 

If I try the Portworx sample snapshot like this:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: px-csi-snapshot
spec:
  volumeSnapshotClassName: px-csi-snapclass
  source:
    persistentVolumeClaimName: powerdns-pvc
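To reproduce this check, the manifest above can be applied and the snapshot inspected (the file name below is an assumption, the object names are from this thread):

```shell
# Apply the sample VolumeSnapshot, then look at its status and events
kubectl apply -f px-csi-snapshot.yaml
kubectl describe volumesnapshot px-csi-snapshot
```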

 

the snapshot is created, but there are some errors:

 

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"snapshot.storage.k8s.io/v1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"px-csi-snapshot","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"powerdns-pvc"},"volumeSnapshotClassName":"px-csi-snapclass"}}
  creationTimestamp: '2024-02-26T07:42:08Z'
  finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
  generation: 1
  managedFields:
    - apiVersion: snapshot.storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:source:
            .: {}
            f:persistentVolumeClaimName: {}
          f:volumeSnapshotClassName: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: '2024-02-26T07:42:08Z'
    - apiVersion: snapshot.storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection": {}
      manager: snapshot-controller
      operation: Update
      time: '2024-02-26T07:42:08Z'
    - apiVersion: snapshot.storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:error:
            .: {}
            f:message: {}
            f:time: {}
          f:readyToUse: {}
      manager: snapshot-controller
      operation: Update
      subresource: status
      time: '2024-02-26T07:42:08Z'
  name: px-csi-snapshot
  namespace: default
  resourceVersion: '483302641'
  uid: e436d03c-b3b0-489a-a9ac-20d1ac08e1cf
  selfLink: >-
    /apis/snapshot.storage.k8s.io/v1/namespaces/default/volumesnapshots/px-csi-snapshot
  labels:
    k8slens-edit-resource-version: v1
status:
  error:
    message: >-
      Failed to create snapshot content with error snapshot controller failed to
      update px-csi-snapshot on API server: cannot get claim from snapshot
    time: '2024-02-26T07:42:08Z'
  readyToUse: false
spec:
  source:
    persistentVolumeClaimName: powerdns-pvc
  volumeSnapshotClassName: px-csi-snapclass


@jaiganeshjk

I have removed the section:

parameters: ## Specify only if px-security is ENABLED
  csi.storage.k8s.io/snapshotter-secret-name: px-user-token
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system
  csi.openstorage.org/snapshot-type: local

I retried the snapshot, and the status is the same:

status:
  error:
    message: >-
      Failed to create snapshot content with error snapshot controller failed to
      update px-csi-snapshot on API server: cannot get claim from snapshot
    time: '2024-02-26T07:49:04Z'
  readyToUse: false


@Vecteur IT The error message seems different. It says that it cannot get the claim (the PersistentVolumeClaim here).

Are you creating the VolumeSnapshot in the same namespace as the PVC?

I see that the VolumeSnapshot is in the default namespace. Can you also confirm whether the PVC powerdns-pvc is in the default namespace too?
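One way to check both namespaces at once (object names are the ones from this thread):

```shell
# The VolumeSnapshot must live in the same namespace as its source PVC
kubectl get pvc powerdns-pvc --all-namespaces 2>/dev/null || kubectl get pvc -A | grep powerdns-pvc
kubectl get volumesnapshot -A | grep px-csi-snapshot
```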


 

@jaiganeshjk You're right, but the result is the same:

 


I have deleted the snapshot and recreated it in the powerdns namespace:

 

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"snapshot.storage.k8s.io/v1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"px-csi-snapshot","namespace":"powerdns"},"spec":{"source":{"persistentVolumeClaimName":"powerdns-pvc"},"volumeSnapshotClassName":"px-csi-snapclass"}}
  creationTimestamp: '2024-02-26T10:06:01Z'
  finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
  generation: 1
  name: px-csi-snapshot
  namespace: powerdns
  resourceVersion: '483404575'
  uid: 2e0ad72f-c9a0-4b99-bb86-b8c5f20d49b1
  selfLink: >-
    /apis/snapshot.storage.k8s.io/v1/namespaces/powerdns/volumesnapshots/px-csi-snapshot
status:
  boundVolumeSnapshotContentName: snapcontent-2e0ad72f-c9a0-4b99-bb86-b8c5f20d49b1
  creationTime: '2024-02-26T10:06:01Z'
  readyToUse: true
  restoreSize: 1Gi
spec:
  source:
    persistentVolumeClaimName: powerdns-pvc
  volumeSnapshotClassName: px-csi-snapclass

 

I had errors before, but now the snapshot is ready to use:

readyToUse: true

 

So I retried the K10 preflight:

 

Using default user ID (1000)
Namespace option not provided, using default namespace
Checking for tools
 --> Found kubectl
 --> Found helm
 --> Found jq
 --> Found cat
 --> Found base64
 --> Found tr
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm version (>= v3.10.0)
 --> Helm binary version meet the requirements
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10tools:6.5.5) to run test
Checking access to the Kubernetes context default
 --> Able to access the default Kubernetes namespace
K10 Kanister tools image
 --> Using Kanister tools image (gcr.io/kasten-images/k10tools:6.5.5) to run test

Running K10Primer Job in cluster with command
     ./k10tools primer storage check csi
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Pod k10primer-4nghx is in Pending phase
Pod k10primer-4nghx is in Pending phase
Pod Ready!
================================================================
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
         Found multiple snapshot API group versions, using preferred.
Creating application
  -> Created pod (kubestr-csi-original-podjlbtq) and pvc (kubestr-csi-original-pvcl8grq)
Taking a snapshot
Cleaning up resources
CSI Snapshot Walkthrough:
  Using annotated VolumeSnapshotClass (px-csi-snapclass)
  Failed to create duplicate snapshot from source. To skip check use '--skipcfs=true' option.: Failed to clone a VolumeSnapshotClass to use to restore the snapshot: Failed to create VolumeSnapshotClass: kubestr-clone-px-csi-snapclass: admission webhook "rke2-snapshot-validation-webhook.csi.kubernetes.io" denied the request: default snapshot class: px-csi-snapclass already exists for driver: pxd.portworx.com  -  Error
Error: {"message":"Failed to create duplicate snapshot from source. To skip check use '--skipcfs=true' option.: Failed to clone a VolumeSnapshotClass to use to restore the snapshot: Failed to create VolumeSnapshotClass: kubestr-clone-px-csi-snapclass: admission webhook \"rke2-snapshot-validation-webhook.csi.kubernetes.io\" denied the request: default snapshot class: px-csi-snapclass already exists for driver: pxd.portworx.com","function":"kasten.io/k10/kio/tools/k10primer.(*TestRetVal).Errors","linenumber":168,"file":"kasten.io/k10/kio/tools/k10primer/k10primer.go:168"}
================================================================
serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted


Hi,

I have resolved this problem:

I removed the default snapshot class annotation (snapshot.storage.kubernetes.io/is-default-class) :)
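For reference, an annotation can be removed with kubectl annotate using the trailing-dash syntax (the class name is the one from this thread):

```shell
# Drop the default-class annotation so the webhook no longer rejects
# kubestr's cloned VolumeSnapshotClass for the same driver
kubectl annotate volumesnapshotclass px-csi-snapclass \
  snapshot.storage.kubernetes.io/is-default-class-
```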

 

And now it’s OK:

 

Checking for tools
 --> Found kubectl
 --> Found helm
 --> Found jq
 --> Found cat
 --> Found base64
 --> Found tr
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm version (>= v3.10.0)
 --> Helm binary version meet the requirements
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10tools:6.5.5) to run test
Checking access to the Kubernetes context default
 --> Able to access the default Kubernetes namespace
K10 Kanister tools image
 --> Using Kanister tools image (gcr.io/kasten-images/k10tools:6.5.5) to run test

Running K10Primer Job in cluster with command
     ./k10tools primer storage check csi
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Pod k10primer-zskfv is in Pending phase
Pod k10primer-zskfv is in Pending phase
Pod Ready!
================================================================
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
         Found multiple snapshot API group versions, using preferred.
Creating application
  -> Created pod (kubestr-csi-original-podlqlbs) and pvc (kubestr-csi-original-pvcxlbr8)
Taking a snapshot
  -> Created snapshot (kubestr-snapshot-20240226103418)
Restoring application
  -> Restored pod (kubestr-csi-cloned-podgdrpw) and pvc (kubestr-csi-cloned-pvcft6z4)
Cleaning up resources
CSI Snapshot Walkthrough:
  Using annotated VolumeSnapshotClass (px-csi-snapclass)
  Successfully tested snapshot restore functionality.  -  OK
================================================================
serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted
