Hello!
I am testing Kasten K10 6.5.12 on EKS (v1.29.1-eks-b9c9ed7) and I need some help.
My app has two PVCs: ebs-claim (StorageClass ebs-sc, provisioner ebs.csi.aws.com) and efs-claim (StorageClass efs-sc, provisioner efs.csi.aws.com).
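For reference, the two claims look roughly like this (a sketch from memory: the names, namespace, and storage classes match my setup, but the sizes and access modes are just illustrative):

# Approximate recreation of my two claims in the app namespace (appmasker).
# Sizes and access modes below are placeholders, not my exact values.
kubectl apply -n appmasker -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
EOF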
When I exclude efs-claim from the policy, it runs and exports the backup to my S3 bucket, so the EBS snapshot side works fine. But as soon as K10 tries to back up the EFS volume, I get the following error:
Error details:
- cause:
    cause:
      cause:
        cause:
          cause:
            fields:
              - name: storageClassName
                value: efs-sc
            file: kasten.io/k10/kio/kube/volume.go:659
            function: kasten.io/k10/kio/kube.CreatePVandPVCClone
            linenumber: 659
            message: Unsupported storageclass for pv/pvc clone
          file: kasten.io/k10/kio/exec/phases/phase/data_manager.go:176
          function: kasten.io/k10/kio/exec/phases/phase.(*SharedVolumeSnapshotManager).SnapshotCreate
          linenumber: 176
          message: Failed to clone PV/PVC
        fields:
          - name: pvcName
            value: efs-claim
          - name: namespace
            value: appmasker
        file: kasten.io/k10/kio/exec/phases/backup/snapshot_data_phase.go:862
        function: kasten.io/k10/kio/exec/phases/backup.basicVolumeSnapshot.basicVolumeSnapshot.func1.func2
        linenumber: 862
        message: Error snapshotting volume
      fields:
        - name: appName
          value: mask-frontend
        - name: appType
          value: statefulset
        - name: namespace
          value: appmasker
      file: kasten.io/k10/kio/exec/phases/backup/snapshot_data_phase.go:873
      function: kasten.io/k10/kio/exec/phases/backup.basicVolumeSnapshot
      linenumber: 873
      message: Failed to snapshot volumes
    file: kasten.io/k10/kio/exec/phases/backup/snapshot_data_phase.go:392
    function: kasten.io/k10/kio/exec/phases/backup.processVolumeArtifacts
    linenumber: 392
    message: Failed snapshots for workload
  message: Job failed to be executed
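The innermost cause points at the storage class itself, so this is how I am pulling its definition in case someone wants to see the exact parameters (happy to paste the full, redacted output):

# Dump the EFS storage class so its parameters (or lack of them) are visible.
kubectl get storageclass efs-sc -o yaml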
I have a valid Infrastructure Profile:
  EFS ACCESS: Enabled (K10 even created a backup vault in AWS Backup named k10vault)
  EBS ACCESS: Enabled
  EBS DIRECT: Enabled
  STATUS: Valid
I also have a valid Location Profile.
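Both profiles were created through the dashboard; in case it helps, this is how I can pull them as custom resources (I am assuming K10 stores them as Profile CRs in the config.kio.kasten.io group in the kasten-io namespace, so correct me if that query is off):

# List the K10 profiles (infrastructure + location) as custom resources.
# Assumption: Profile CRs live in config.kio.kasten.io, namespace kasten-io.
kubectl get profiles.config.kio.kasten.io -n kasten-io

# Full spec/status of the profiles, if the exact settings are useful.
kubectl get profiles.config.kio.kasten.io -n kasten-io -o yaml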
I also had no issues running curl -s https://docs.kasten.io/tools/k10_primer.sh | bash:
Running K10Primer Job in cluster with command
./k10tools primer
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Pod Ready!
================================================================
Kubernetes Version Check:
  Valid kubernetes version (v1.29.1-eks-b9c9ed7) - OK

RBAC Check:
  Kubernetes RBAC is enabled - OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled - OK

CSI Capabilities Check:
  Using CSI GroupVersion snapshot.storage.k8s.io/v1 - OK

Validating Provisioners:
ebs.csi.aws.com:
  Is a CSI Provisioner - OK
  Storage Classes:
    ebs-sc
      Valid Storage Class - OK
  Volume Snapshot Classes:
    ebs-snapshot-class
      Has k10.kasten.io/is-snapshot-class annotation set to true - OK
      Has deletionPolicy 'Delete' - OK
    k10-clone-ebs-snapshot-class

efs.csi.aws.com:
  Storage Classes:
    efs-sc
      Valid Storage Class - OK

kubernetes.io/aws-ebs:
  Storage Classes:
    gp2
      Valid Storage Class - OK

Validate Generic Volume Snapshot:
  Pod created successfully - OK
  GVS Backup command executed successfully - OK
  Pod deleted successfully - OK
=============================================================
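I assume the Generic Volume Snapshot check above ran against my default storage class, because the same check against efs-sc fails in System Information below. To reproduce that directly, I believe the primer script can be pointed at a single storage class with -s (that is how I remember the Kasten docs describing it, so the flag may need correcting):

# Re-run the pre-flight checks against the EFS storage class only.
# Assumption: -s <storage-class> is the primer's storage class selector.
curl -s https://docs.kasten.io/tools/k10_primer.sh | bash /dev/stdin -s efs-sc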
In K10's System Information I have:
  Storage class gp2: Valid
  Storage class ebs-sc: Valid
  Storage class efs-sc: Failed
    PROVISIONER             efs.csi.aws.com
    RECLAIM POLICY          Delete
    VOLUME BINDING MODE     Immediate
    ALLOW VOLUME EXPANSION  false
    K10 SNAPSHOT TYPE       GVS
    Status failed:
    failed to create pod ({
      "message": "failed while waiting for PVC to be ready",
      "function": "kasten.io/k10/kio/tools/k10primer.(*gvsPodOperator).createAndWait",
      "linenumber": 157,
      "file": "kasten.io/k10/kio/tools/k10primer/validate_gvs.go:157",
      "fields": [{"name": "pvc", "value": "kanister-tools-zl5z2"}],
      "cause": {
        "message": "found issues creating PVC",
        "function": "kasten.io/k10/kio/tools/k10primer.(*gvsPodOperator).waitForPVCReadyOrCheckEventIssues",
        "linenumber": 289,
        "file": "kasten.io/k10/kio/tools/k10primer/validate_gvs.go:289",
        "cause": {
          "message": "failed to provision volume with StorageClass \"efs-sc\": rpc error: code = InvalidArgument desc = Missing provisioningMode parameter"
        }
      }
    })
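As far as I can tell, that innermost cause is the real signal: the EFS CSI driver refuses to provision from efs-sc because the StorageClass has no provisioningMode parameter, i.e. it is only set up for statically provisioned volumes. From the aws-efs-csi-driver documentation, a dynamically provisioning EFS storage class would look roughly like this (the fileSystemId below is a placeholder and I have not applied any of this yet):

# Sketch of an EFS storage class with access-point based dynamic provisioning,
# which is what the "Missing provisioningMode parameter" error seems to ask for.
# fs-0123456789abcdef0 is a placeholder, not my real filesystem ID.
# Note: StorageClass parameters are immutable, so efs-sc would have to be
# deleted and recreated rather than patched in place.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: false
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"
EOF

My question before I try that: is recreating efs-sc with dynamic provisioning parameters the expected way to make K10's Generic Volume Snapshot backups work for EFS, or should the EFS volume instead be protected through the AWS Backup vault (k10vault) that the Infrastructure Profile created? Any pointers are appreciated.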