Dear all,
I have an issue taking a snapshot backup from one of the namespaces on k8s to an NFS directory. I installed the CSI NFS driver, pointed the storage class at it, and then created the PVC to create the PV, but I am facing the attached image and the error below:
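For reference, a minimal sketch of the kind of setup described above, assuming the upstream csi-driver-nfs provisioner (nfs.csi.k8s.io); the server, share, and resource names are hypothetical placeholders:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io        # csi-driver-nfs provisioner
parameters:
  server: nfs.example.com          # placeholder: your NFS server address
  share: /exports/backup           # placeholder: your exported directory
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 10Gi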
Hello
Could you share the output of the script below so that we can get more information about your SC and VSC configuration/settings?
curl https://docs.kasten.io/tools/k10_primer.sh | bash
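You can also inspect the SC and VSC objects directly; as a quick check (assuming the external-snapshotter CRDs are installed, so the volumesnapshotclass resource exists):

kubectl get storageclass
kubectl get volumesnapshotclass -o yaml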
It seems Volume Snapshots are not supported for this storage provider, as mentioned in the error logs.
You can use K10 with Kanister, which gives you the ability to back up, restore, and migrate this application's data.
Thanks
Ahmed Hagag
Dear Ahmed,
Below is the output of the command. Also, how can I configure Kanister with K10: is it configured during installation or after installation?
# curl https://docs.kasten.io/tools/k10_primer.sh | bash
Namespace option not provided, using default namespace
Checking for tools
--> Found kubectl
--> Found helm
Checking if the Kasten Helm repo is present
--> The Kasten Helm repo was found
Checking for required Helm version (>= v3.0.0)
--> No Tiller needed with Helm v3.7.2
K10Primer image
--> Using Image (gcr.io/kasten-images/k10tools:4.5.12) to run test
Checking access to the Kubernetes context kubernetes-admin@kubernetes
--> Able to access the default Kubernetes namespace
K10 Kanister tools image
--> Using Kanister tools image (ghcr.io/kanisterio/kanister-tools:0.75.0) to run test
Running K10Primer Job in cluster with command-
./k10tools primer
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Waiting for pod k10primer--1-mtkkr to be ready - PodInitializing
Waiting for pod k10primer--1-mtkkr to be ready - PodInitializing
Waiting for pod k10primer--1-mtkkr to be ready -
(last line repeated many more times; the pod never became ready)
BR,
Hello
I see the script did not complete. Please follow the steps in the link below; it also includes an end-to-end example showing how to do that:
https://docs.kasten.io/latest/install/generic.html?#generic-storage-backup-and-restore
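As a rough sketch of how this is typically enabled (the injectKanisterSidecar flag is taken from the K10 Helm chart documentation; verify the exact values against the link above), it can be set either at install time or later with an upgrade:

helm install k10 kasten/k10 --namespace kasten-io \
  --set injectKanisterSidecar.enabled=true

# or, on an existing install:
helm upgrade k10 kasten/k10 --namespace kasten-io \
  --reuse-values --set injectKanisterSidecar.enabled=true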
Ahmed
Hello
I have the same issue. Creating a backup without a PV and PVC works fine, but once the applications have a PV and PVC I get this error:
Job failed to be executed
Failed to fetch the snapshot session
Volume Snapshots are not supported for this storage provider. Try K10's Generic Storage Backup method (https://docs.kasten.io/latest/install/generic.html?#generic-storage-backup-and-restore) or contact contact@kasten.io
After that I tried enabling Kanister Sidecar Injection, but my MinIO tenants are not stable; they keep restarting every few minutes. When I set it back to false, everything is normal.
Enabling Kanister Sidecar Injection adds an additional sidecar container to your deployment, which can cause additional resource usage and may lead to instability in your cluster. This could be the reason why your MinIO instances are restarting frequently when sidecar injection is enabled.
To troubleshoot this issue, you can try the following steps (a short sketch follows the list):
1. Check the logs of the MinIO pods to see if there are any errors or warnings that might indicate the cause of the instability. You can use the kubectl logs command to view the logs of a specific pod.
2. Check the resource usage of your cluster to see if it is reaching its limits. You can use the kubectl top command to view the resource usage of your pods and nodes.
3. Try increasing the resource limits for your MinIO deployment to see if it improves stability. You can do this by modifying the resources section of the deployment YAML.
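For example (pod and namespace names are placeholders, and the resource values are illustrative; adjust them to your workload):

kubectl logs <minio-pod> -n <tenant-namespace> --all-containers
kubectl top pods -n <tenant-namespace>
kubectl top nodes

# and in the MinIO deployment spec, something like:
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi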