Question

Failed to fetch the snapshot session



Dear all,

I have an issue taking a snapshot backup from one of the namespaces on k8s to an NFS directory. I installed the CSI NFS drivers and pointed the storage class at them, then created the PVC to create the PV, but I am facing the attached image and the error below:

ERROR MESSAGES:
Failed to fetch the snapshot session
Volume Snapshots are not supported for this storage provider.Try K10's Generic Storage Backup method(https://docs.kasten.io/latest/install/generic.html?#generic-storage-backup-and-restore)
 
Is there any missing configuration for the CSI driver, storage class, or PVC?
 
The NFS info:
server: 10.10.x.x
path: /export/data (permissions 777)
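
For reference, the StorageClass and PVC I created are along these lines (a rough sketch; the names are placeholders, and nfs.csi.k8s.io is the upstream csi-driver-nfs provisioner I am assuming here):

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi                 # placeholder name
provisioner: nfs.csi.k8s.io     # assumed: default provisioner of csi-driver-nfs
parameters:
  server: 10.10.x.x
  share: /export/data
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 10Gi
EOF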
 
BR,
 

 




Hello @alaaeldin 

 

Could you share the output of the script below so we can get more information about your StorageClass and VolumeSnapshotClass configuration/settings?

curl https://docs.kasten.io/tools/k10_primer.sh | bash
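
While that runs, you can also check directly which snapshot classes your cluster has (the volumesnapshotclass resource only exists if the CSI external-snapshotter CRDs are installed; the storage class name below is a placeholder):

kubectl get storageclass
kubectl get volumesnapshotclass
kubectl describe storageclass <your-nfs-storage-class>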

It seems Volume Snapshots are not supported for this storage provider, as mentioned in the error logs.

You can use K10 with Kanister, which gives you the ability to back up, restore, and migrate this application's data.
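
A minimal sketch of turning that on for an existing install (assuming your Helm release is named k10 and lives in the kasten-io namespace):

# enable Kanister sidecar injection for K10's Generic Storage Backup
helm upgrade k10 kasten/k10 --namespace kasten-io --reuse-values \
  --set injectKanisterSidecar.enabled=true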

 

Thanks

Ahmed Hagag


Dear Ahmed,

Below is the output of the command. Also, how can I configure Kanister with K10? Is it done during installation or after installation?

# curl https://docs.kasten.io/tools/k10_primer.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7045  100  7045    0     0  13732      0 --:--:-- --:--:-- --:--:-- 13706
Namespace option not provided, using default namespace
Checking for tools
 --> Found kubectl
 --> Found helm
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm version (>= v3.0.0)
 --> No Tiller needed with Helm v3.7.2
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10tools:4.5.12) to run test
Checking access to the Kubernetes context kubernetes-admin@kubernetes
 --> Able to access the default Kubernetes namespace
K10 Kanister tools image
 --> Using Kanister tools image (ghcr.io/kanisterio/kanister-tools:0.75.0) to run test

Running K10Primer Job in cluster with command-
     ./k10tools primer
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Waiting for pod k10primer--1-mtkkr to be ready - PodInitializing
Waiting for pod k10primer--1-mtkkr to be ready - PodInitializing
Waiting for pod k10primer--1-mtkkr to be ready -
[... the "Waiting for pod" line repeated many more times; the pod never became ready ...]

 

 

BR,


Hello @alaaeldin 

I see the script did not complete. Please follow the steps in the link below.

There is also an end-to-end example there showing how to do that:

https://docs.kasten.io/latest/install/generic.html?#generic-storage-backup-and-restore
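
To answer your earlier question: this can be set during installation or afterwards via helm upgrade. A sketch of the flow from that page (the k10/injectKanisterSidecar label key is only an example; assuming release k10 in the kasten-io namespace):

# enable injection and scope it to labeled namespaces
helm upgrade k10 kasten/k10 --namespace kasten-io --reuse-values \
  --set injectKanisterSidecar.enabled=true \
  --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true

# then label each namespace whose workloads should get the Kanister sidecar
kubectl label namespace <your-app-namespace> k10/injectKanisterSidecar=true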

 

 

Ahmed

Hello @Hagag!

I have the same issue. Creating a backup without a PV and PVC works fine, but once the workload has a PV and PVC I get this error:

 Job failed to be executed

 Failed to fetch the snapshot session

 Volume Snapshots are not supported for this storage provider.Try K10's Generic Storage Backup method(https://docs.kasten.io/latest/install/generic.html?#generic-storage-backup-and-restore) or contact contact@kasten.io

 

Afterwards, I tried enabling Kanister Sidecar Injection, but my MinIO tenants are not stable; they keep restarting every few minutes. When I set it back to false, everything is normal.


@mcoul 

Enabling Kanister Sidecar Injection adds a sidecar container to your deployments, which increases resource usage and may lead to instability in your cluster. This could be the reason why your MinIO instances restart frequently when sidecar injection is enabled.

To troubleshoot this issue, you can try the following steps (example commands follow the list):

  1. Check the logs of the MinIO pods to see if there are any errors or warnings that might indicate the cause of the instability. You can use the kubectl logs command to view the logs of a specific pod.

  2. Check the resource usage of your cluster to see if it is reaching its limits. You can use the kubectl top command to view the resource usage of your pods and nodes.

  3. Try increasing the resource limits for your MinIO deployment to see if it improves stability. You can do this by modifying the resources section of the deployment YAML.
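
A quick sketch of those checks (namespace, pod, and workload names are placeholders):

# 1. logs of a restarting MinIO pod; --previous shows the crashed container's output
kubectl -n <tenant-namespace> logs <minio-pod> --previous

# 2. resource usage of pods and nodes (requires metrics-server)
kubectl top pods -n <tenant-namespace>
kubectl top nodes

# 3. raise requests/limits on the MinIO workload, e.g.:
kubectl -n <tenant-namespace> set resources statefulset <minio-statefulset> \
  --containers=minio --requests=cpu=500m,memory=1Gi --limits=cpu=1,memory=2Gi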

@Hagag I tried these and it is still the same.
