Solved

Snapshot restore to same NS on OpenEBS cStor


I'm experimenting with Kasten K10 on a K3s cluster running OpenEBS cStor.

I noticed that it is not possible to restore a snapshot over the currently running Pods in the same namespace, whereas restoring to a new namespace works fine. Is this expected?

Here are the error messages:

MountVolume.MountDevice failed for volume "pvc-c4a1261a-3c51-4861-9985-30c65651440c" : rpc error: code = Internal desc = Volume pvc-c4a1261a-3c51-4861-9985-30c65651440c is not ready: Replicas yet to connect to controller
Unable to attach or mount volumes: unmounted volumes=[mypod], unattached volumes=[mypod kube-api-access-ghlgg]: timed out waiting for the condition

 


Best answer by jaiganeshjk 13 March 2023, 12:05


2 comments


@jaiganeshjk 


@zimbres Thank you for posting your question here.

During restore of an application, K10 always deletes the application's PVCs and recreates them from the snapshot.

If the snapshot of your PVC is tied to the lifecycle of the volume, the restore will fail, because the snapshot is destroyed when the PVC is deleted as part of the restore.
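As a general way to see how snapshots on a cluster relate to their source volumes, you can inspect the `deletionPolicy` of the CSI VolumeSnapshotClasses. This is a sketch using the standard Kubernetes CSI snapshot API; it shows the generic setting only and does not change how cStor itself couples a snapshot to its source volume:

```shell
# List each VolumeSnapshotClass with its CSI driver and deletionPolicy.
# "Delete" removes the storage-side snapshot when the VolumeSnapshot
# object is deleted; "Retain" keeps it on the backend.
kubectl get volumesnapshotclass \
  -o custom-columns=NAME:.metadata.name,DRIVER:.driver,POLICY:.deletionPolicy
```

If the class your PVCs use reports `Delete`, deleting the snapshot objects during restore will also remove the backing snapshot data.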

In this case, it would be better to export your application to an external target (a K10 location profile) and use that export to do an in-place restore.
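For illustration, a K10 Policy that both snapshots an application and exports it to a location profile might look like the sketch below. The profile name `my-object-store` and app namespace `myapp` are placeholders, and the exact field layout should be checked against the K10 documentation for your version:

```yaml
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: myapp-backup-export   # illustrative name
  namespace: kasten-io
spec:
  frequency: "@daily"
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: "@daily"
        profile:
          name: my-object-store   # placeholder location profile
          namespace: kasten-io
        exportData:
          enabled: true           # export volume data, not just metadata
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: myapp   # placeholder namespace
```

Because the exported restore point lives outside the cluster, it survives the PVC deletion that happens during an in-place restore.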

Please let me know if you have more questions.
