Hi @Madi.Cristil @safiya ...probably best to move this post into the Kasten Support group. Thanks.
Hi,
Can anyone please help me resolve the issue posted above?
Thanks
Hello @vishnuvardhan4885,
Would it be possible to run the K10 primer against your StorageClass?
curl -s https://docs.kasten.io/tools/k10_primer.sh  | bash /dev/stdin csi -s $storageclass -i gcr.io/kasten-images/k10tools:7.0.1
Thanks
Emmanuel
Hi @EBrockman,
Thanks for the reply.
As suggested, I ran the curl command; the results are below:
[root@cluster-poc new]# curl -s https://docs.kasten.io/tools/k10_primer.sh  | bash /dev/stdin csi -s $storageclass -i gcr.io/kasten-images/k10tools:7.0.1
Using default user ID (1000)
Namespace option not provided, using default namespace
Checking for tools
 --> Found kubectl
 --> Found helm
 --> Found jq
 --> Found cat
 --> Found base64
 --> Found tr
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm version (>= v3.11.0)
 --> Helm binary version meet the requirements
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10tools:7.0.1) to run test
Checking access to the Kubernetes context admin@iesp-he-pi-os-dhn-011-espoo
 --> Able to access the default Kubernetes namespace
K10 Kanister tools image
 --> Using Kanister tools image (gcr.io/kasten-images/k10tools:7.0.1) to run test
Running K10Primer Job in cluster with command
   ./k10tools primer storage check csi
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "k10primer" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "k10primer" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "k10primer" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "k10primer" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
job.batch/k10primer created
Pod k10primer-kqb8n is in Pending phase
Pod Ready!
================================================================
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
Using "K10_PRIMER_CONFIG_YAML" env var content as config source
CSI Snapshot Walkthrough:
 Unable to find StorageClass (-i): storageclasses.storage.k8s.io "-i" not found  -  Error
Error: {"message":"Unable to find StorageClass (-i): storageclasses.storage.k8s.io \"-i\" not found","function":"kasten.io/k10/kio/tools/k10primer.(*TestRetVal).Errors","linenumber":168,"file":"kasten.io/k10/kio/tools/k10primer/k10primer.go:168"}
================================================================
serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted
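The `StorageClass "-i" not found` error above is consistent with `$storageclass` being unset when the script ran: the unquoted variable expands to nothing, so `-s` loses its value and the argument parser consumes the next flag, `-i`, as the StorageClass name. A minimal sketch of that expansion (no cluster needed):

```shell
# Reproduce the word splitting: with $storageclass unset, the unquoted
# expansion disappears entirely and "-s" is followed directly by "-i".
unset storageclass
set -- csi -s $storageclass -i gcr.io/kasten-images/k10tools:7.0.1
echo "$@"
# -> csi -s -i gcr.io/kasten-images/k10tools:7.0.1
```

Quoting the variable (`-s "$storageclass"`) would instead pass an empty string, which fails with a clearer message.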
k get storageclass
NAME                  PROVISIONER               RECLAIMPOLICY  VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION  AGE
cinder-csi (default)  cinder.csi.openstack.org  Delete      Immediate      true          574d
cinder-nvme       cinder.csi.openstack.org  Delete      Immediate      true          566d
[root@cluster-poc new]#
[root@cluster-poc new]# export storageclass=cinder-csi
After setting the storage class and running the command again:
curl -s https://docs.kasten.io/tools/k10_primer.sh  | bash /dev/stdin csi -s $storageclass -i gcr.io/kasten-images/k10tools:7.0.1
the following error is displayed:
CSI Snapshot Walkthrough:
 Using annotated VolumeSnapshotClass (cinder-snapshot-class)
 Failed to create Snapshot: CSI Driver failed to create snapshot for PVC (kubestr-csi-original-pvc7zmjz) in Namespace (default): Failed to check and update snapshot content: failed to take snapshot of the volume 8570be30-1710-4855-8510-da53240055d2: "rpc error: code = Internal desc = CreateSnapshot failed with error Bad request with: [POST https://he-pi-os-dhn-011.nesc.gemini.net:8776/v3/d62ec5e65d7847db91499d890092bbe3/snapshots], error message: {\"badRequest\": {\"message\": \"Invalid input for field/attribute metadata. Value: {u'csi.storage.k8s.io/volumesnapshot/name': u'kubestr-snapshot-20240627051001', u'cinder.csi.openstack.org/cluster': u'kubernetes', u'csi.storage.k8s.io/volumesnapshot/namespace': u'default', u'csi.storage.k8s.io/volumesnapshotcontent/name': u'snapcontent-e0fba9bd-8f4b-4422-b1d3-e9a495738a96'}. u'cinder.csi.openstack.org/cluster', u'csi.storage.k8s.io/volumesnapshot/name', u'csi.storage.k8s.io/volumesnapshot/namespace', u'csi.storage.k8s.io/volumesnapshotcontent/name' do not match any of the regexes: '^[a-zA-Z0-9-_:. ]{1,255}$'\", \"code\": 400}}"  -  Error
Error: {"message":"Failed to create Snapshot: CSI Driver failed to create snapshot for PVC (kubestr-csi-original-pvc7zmjz) in Namespace (default): Failed to check and update snapshot content: failed to take snapshot of the volume 8570be30-1710-4855-8510-da53240055d2: \"rpc error: code = Internal desc = CreateSnapshot failed with error Bad request with: [POST https://he-pi-os-dhn-011.nesc.gemini.net:8776/v3/d62ec5e65d7847db91499d890092bbe3/snapshots], error message: {\\\"badRequest\\\": {\\\"message\\\": \\\"Invalid input for field/attribute metadata. Value: {u'csi.storage.k8s.io/volumesnapshot/name': u'kubestr-snapshot-20240627051001', u'cinder.csi.openstack.org/cluster': u'kubernetes', u'csi.storage.k8s.io/volumesnapshot/namespace': u'default', u'csi.storage.k8s.io/volumesnapshotcontent/name': u'snapcontent-e0fba9bd-8f4b-4422-b1d3-e9a495738a96'}. u'cinder.csi.openstack.org/cluster', u'csi.storage.k8s.io/volumesnapshot/name', u'csi.storage.k8s.io/volumesnapshot/namespace', u'csi.storage.k8s.io/volumesnapshotcontent/name' do not match any of the regexes: '^[a-zA-Z0-9-_:. ]{1,255}$'\\\", \\\"code\\\": 400}}\"","function":"kasten.io/k10/kio/tools/k10primer.(*TestRetVal).Errors","linenumber":168,"file":"kasten.io/k10/kio/tools/k10primer/k10primer.go:168"}
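The rejection above can be reproduced locally: the CSI snapshotter attaches metadata keys like `csi.storage.k8s.io/volumesnapshot/name`, and the `/` character is not in the character class Cinder validates against. A sketch, with the pattern copied from the error message (the hyphen is moved to the end of the class so `grep` does not treat it as a range):

```shell
# Character class from the Cinder 400 response; hyphen moved to the end
# of the class to avoid range ambiguity in POSIX ERE.
pattern='^[a-zA-Z0-9_:. -]{1,255}$'

# One of the metadata keys the CSI snapshotter sends; "/" is not allowed.
key='csi.storage.k8s.io/volumesnapshot/name'

if printf '%s' "$key" | grep -Eq "$pattern"; then
  echo "accepted"
else
  echo "rejected"
fi
# -> rejected
```

This suggests the failure is on the Cinder API side, which refuses the snapshotter's standard metadata keys, rather than in K10 itself.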
I am still facing the issue. Please advise.
Hello @vishnuvardhan4885,
I would recommend taking a look at the Cinder CSI snapshotter logs. The primer essentially runs a simple snapshot and restore; if the CSI driver has problems performing this task, the primer will fail. You could send over the snapshotter logs and we can take a look from there. Beyond that, you would likely need to engage Cinder support to determine why volume snapshotting is failing.
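To pull those logs, something along these lines may help; the namespace and label below are assumptions (they vary by deployment), so adjust them to match your Cinder CSI install:

```shell
# Hypothetical commands: "kube-system" and the "app=csi-cinder-controllerplugin"
# label are assumptions about where the Cinder CSI controller runs.
# "|| true" keeps the sketch from aborting if the cluster is unreachable.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n kube-system get pods -l app=csi-cinder-controllerplugin || true
  kubectl -n kube-system logs -l app=csi-cinder-controllerplugin \
    -c csi-snapshotter --tail=200 || true
else
  echo "kubectl not found in PATH"
fi
```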
Thanks
Emmanuel
Hello @vishnuvardhan4885,
I would also recommend trying the Infrastructure Profile for Cinder, which allows K10 to use the storage APIs directly to perform the task: https://docs.kasten.io/latest/install/storage.html#cinder-openstack
Thanks
Emmanuel