When I run the K10 Primer script, I get this error with the Portworx CSI driver:
curl -s https://docs.kasten.io/tools/k10_primer.sh | bash /dev/stdin csi -s px-csi-replicated

Using default user ID (1000)
Namespace option not provided, using default namespace
Checking for tools
 --> Found kubectl
 --> Found helm
 --> Found jq
 --> Found cat
 --> Found base64
 --> Found tr
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm version (>= v3.10.0)
 --> Helm binary version meet the requirements
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10tools:6.5.5) to run test
Checking access to the Kubernetes context default
 --> Able to access the default Kubernetes namespace
K10 Kanister tools image
 --> Using Kanister tools image (gcr.io/kasten-images/k10tools:6.5.5) to run test
Running K10Primer Job in cluster with command
./k10tools primer storage check csi
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod k10primer-9f4tc is in Pending phase
Pod Ready!
================================================================
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Found multiple snapshot API group versions, using preferred.
Creating application
 -> Created pod (kubestr-csi-original-podqgj4k) and pvc (kubestr-csi-original-pvc4h2r7)
Taking a snapshot
Cleaning up resources
CSI Snapshot Walkthrough:
  Using annotated VolumeSnapshotClass (px-csi-snapclass)
  Failed to create Snapshot: CSI Driver failed to create snapshot for PVC (kubestr-csi-original-pvc4h2r7) in Namespace (default): Failed to create snapshot: failed to get input parameters to create snapshot for content snapcontent-573faf63-9a55-45c9-abb3-307c7f057a89: "cannot get credentials for snapshot content \"snapcontent-573faf63-9a55-45c9-abb3-307c7f057a89\""  -  Error
Error: {"message":"Failed to create Snapshot: CSI Driver failed to create snapshot for PVC (kubestr-csi-original-pvc4h2r7) in Namespace (default): Failed to create snapshot: failed to get input parameters to create snapshot for content snapcontent-573faf63-9a55-45c9-abb3-307c7f057a89: \"cannot get credentials for snapshot content \\\"snapcontent-573faf63-9a55-45c9-abb3-307c7f057a89\\\"\"","function":"kasten.io/k10/kio/tools/k10primer.(*TestRetVal).Errors","linenumber":168,"file":"kasten.io/k10/kio/tools/k10primer/k10primer.go:168"}
================================================================
serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted
Do you have any idea how to fix this?
Thx :)
@jaiganeshjk
@Vecteur IT Thanks for creating this topic.
From the error message, it seems that the csi-snapshotter is failing to create the snapshot in the storage backend because the credentials/secret reference is unavailable.
Do you have px-security enabled? Does your VolumeSnapshotClass have fields like csi.storage.k8s.io/snapshotter-secret-name and csi.storage.k8s.io/snapshotter-secret-namespace under parameters?
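For reference, a VolumeSnapshotClass carrying those secret parameters would look roughly like this (a sketch only; the secret name/namespace values mirror the ones quoted later in this thread, and the parameters block applies only when px-security is enabled):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: px-csi-snapclass
  annotations:
    # Marks this class for use by K10 ("annotated VolumeSnapshotClass" in the primer output)
    k10.kasten.io/is-snapshot-class: "true"
driver: pxd.portworx.com
deletionPolicy: Delete
parameters:
  # Specify only if px-security is ENABLED; the referenced secret must exist
  csi.storage.k8s.io/snapshotter-secret-name: px-user-token
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system
```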
status:
  error:
    message: >-
      Failed to create snapshot content with error snapshot controller failed to
      update px-csi-snapshot on API server: cannot get claim from snapshot
    time: '2024-02-26T07:42:08Z'
  readyToUse: false
spec:
  source:
    persistentVolumeClaimName: powerdns-pvc
    volumeSnapshotClassName: px-csi-snapclass
@jaiganeshjk
I have removed this section:
parameters:
  ## Specify only if px-security is ENABLED
  csi.storage.k8s.io/snapshotter-secret-name: px-user-token
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system
  csi.openstorage.org/snapshot-type: local
I have retried the snapshot, and the status is the same:
status:
  error:
    message: >-
      Failed to create snapshot content with error snapshot controller failed to
      update px-csi-snapshot on API server: cannot get claim from snapshot
    time: '2024-02-26T07:49:04Z'
  readyToUse: false
@Vecteur IT The error message seems different now. It says that it cannot get the claim (the PersistentVolumeClaim, in this case).
Are you creating the VolumeSnapshot in the same namespace as the PVC?
I see that the VolumeSnapshot is in the default namespace. Can you also confirm that the PVC powerdns-pvc is in the default namespace?
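A quick way to check is to look both objects up explicitly (a sketch using the names from this thread):

```shell
# The snapshot controller resolves the PVC in the VolumeSnapshot's own
# namespace, so both objects must live in the same one (default here)
kubectl get volumesnapshot px-csi-snapshot -n default
kubectl get pvc powerdns-pvc -n default
```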
@jaiganeshjk You're right, but the result is the same:
Using default user ID (1000)
Namespace option not provided, using default namespace
Checking for tools
 --> Found kubectl
 --> Found helm
 --> Found jq
 --> Found cat
 --> Found base64
 --> Found tr
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm version (>= v3.10.0)
 --> Helm binary version meet the requirements
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10tools:6.5.5) to run test
Checking access to the Kubernetes context default
 --> Able to access the default Kubernetes namespace
K10 Kanister tools image
 --> Using Kanister tools image (gcr.io/kasten-images/k10tools:6.5.5) to run test
Running K10Primer Job in cluster with command
./k10tools primer storage check csi
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Pod k10primer-4nghx is in Pending phase
Pod k10primer-4nghx is in Pending phase
Pod Ready!
================================================================
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Found multiple snapshot API group versions, using preferred.
Creating application
 -> Created pod (kubestr-csi-original-podjlbtq) and pvc (kubestr-csi-original-pvcl8grq)
Taking a snapshot
Cleaning up resources
CSI Snapshot Walkthrough:
  Using annotated VolumeSnapshotClass (px-csi-snapclass)
  Failed to create duplicate snapshot from source. To skip check use '--skipcfs=true' option.: Failed to clone a VolumeSnapshotClass to use to restore the snapshot: Failed to create VolumeSnapshotClass: kubestr-clone-px-csi-snapclass: admission webhook "rke2-snapshot-validation-webhook.csi.kubernetes.io" denied the request: default snapshot class: px-csi-snapclass already exists for driver: pxd.portworx.com  -  Error
Error: {"message":"Failed to create duplicate snapshot from source. To skip check use '--skipcfs=true' option.: Failed to clone a VolumeSnapshotClass to use to restore the snapshot: Failed to create VolumeSnapshotClass: kubestr-clone-px-csi-snapclass: admission webhook \"rke2-snapshot-validation-webhook.csi.kubernetes.io\" denied the request: default snapshot class: px-csi-snapclass already exists for driver: pxd.portworx.com","function":"kasten.io/k10/kio/tools/k10primer.(*TestRetVal).Errors","linenumber":168,"file":"kasten.io/k10/kio/tools/k10primer/k10primer.go:168"}
================================================================
serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted
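The webhook denies the request because the primer's cloned class (kubestr-clone-px-csi-snapclass) copies the default-class marker while px-csi-snapclass is already the default for pxd.portworx.com. One way to spot which classes carry the marker (a sketch; the annotation name follows the Kubernetes external-snapshotter convention):

```shell
# Print each VolumeSnapshotClass name alongside its is-default-class annotation, if any
kubectl get volumesnapshotclass -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.snapshot\.storage\.kubernetes\.io/is-default-class}{"\n"}{end}'
```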
Hi,
I have resolved the problem:
I removed the default snapshot class annotation :)
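For anyone hitting the same webhook denial, removing the default-class marker can be done like this (illustrative; the annotation name comes from the external-snapshotter convention, and the trailing '-' tells kubectl to delete the annotation):

```shell
# Clear the default-class annotation so the primer's cloned class no longer conflicts
kubectl annotate volumesnapshotclass px-csi-snapclass \
  snapshot.storage.kubernetes.io/is-default-class-
```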
And now it’s OK:
Checking for tools
 --> Found kubectl
 --> Found helm
 --> Found jq
 --> Found cat
 --> Found base64
 --> Found tr
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm version (>= v3.10.0)
 --> Helm binary version meet the requirements
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10tools:6.5.5) to run test
Checking access to the Kubernetes context default
 --> Able to access the default Kubernetes namespace
K10 Kanister tools image
 --> Using Kanister tools image (gcr.io/kasten-images/k10tools:6.5.5) to run test
Running K10Primer Job in cluster with command
./k10tools primer storage check csi
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Pod k10primer-zskfv is in Pending phase
Pod k10primer-zskfv is in Pending phase
Pod Ready!
================================================================
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Found multiple snapshot API group versions, using preferred.
Creating application
 -> Created pod (kubestr-csi-original-podlqlbs) and pvc (kubestr-csi-original-pvcxlbr8)
Taking a snapshot
 -> Created snapshot (kubestr-snapshot-20240226103418)
Restoring application
 -> Restored pod (kubestr-csi-cloned-podgdrpw) and pvc (kubestr-csi-cloned-pvcft6z4)
Cleaning up resources
CSI Snapshot Walkthrough:
  Using annotated VolumeSnapshotClass (px-csi-snapclass)
  Successfully tested snapshot restore functionality.  -  OK
================================================================
serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted