Question

Restore of removed namespace failed


If I restore the snapshot into a new namespace, it works.

But if I delete that namespace and restore from K10/Applications/Removed, I always get this error:

 

cause:
    cause:
      cause:
        cause:
          cause:
            message: "Specified 1 replicas and only 0 are ready: could not get
              StatefulSet{Namespace: rumburak-novy, Name: my-postgresql}: client
              rate limiter Wait returned an error: rate: Wait(n=1) would exceed
              context deadline"
          fields:
            - name: statefulset
              value: my-postgresql
          file: kasten.io/k10/kio/kube/workload/workload.go:47
          function: kasten.io/k10/kio/kube/workload.WaitForWorkloadReady
          linenumber: 47
          message: Statefulset not in ready state
        fields:
          - name: namespace
            value: rumburak-novy
          - name: name
            value: my-postgresql
        file: kasten.io/k10/kio/exec/phases/phase/restore_app.go:773
        function: kasten.io/k10/kio/exec/phases/phase.(*restoreApplicationPhase).waitForWorkload
        linenumber: 773
        message: Error waiting for workload to be ready
      file: kasten.io/k10/kio/exec/phases/phase/restore_app.go:373
      function: kasten.io/k10/kio/exec/phases/phase.(*restoreApplicationPhase).restoreApp
      linenumber: 373
      message: Failed to restore workloads
    file: kasten.io/k10/kio/exec/internal/runner/phase_runner.go:144
    function: kasten.io/k10/kio/exec/internal/runner.(*phaseRunner).execPlannedPhase
    linenumber: 144
    message: Failure in planned phase
  message: Job failed to be executed

 

What does this mean?

requested volume size 8589934592 is greater than the size 0 for the source snapshot k10-csi-snap-xvsqjrc55fx6qmdt. Volume plugin needs to handle volume expansion.

 

k logs csi-nfs-controller-d96ccb59c-b7cxx -n kube-system

I0223 07:49:22.750254       1 controller.go:1366] provision "rumburak-novy/data-my-postgresql-0" class "nfs-csi": started
I0223 07:49:22.759026       1 event.go:364] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rumburak-novy", Name:"data-my-postgresql-0", UID:"c36787bf-9cef-4c50-b199-5cd9b2aeb215", APIVersion:"v1", ResourceVersion:"6262566", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rumburak-novy/data-my-postgresql-0"
W0223 07:49:22.775566       1 controller.go:1202] requested volume size 8589934592 is greater than the size 0 for the source snapshot k10-csi-snap-xvsqjrc55fx6qmdt. Volume plugin needs to handle volume expansion.
I0223 07:49:23.015094       1 controller.go:1075] Final error received, removing PVC c36787bf-9cef-4c50-b199-5cd9b2aeb215 from claims in progress
W0223 07:49:23.017682       1 controller.go:934] Retrying syncing claim "c36787bf-9cef-4c50-b199-5cd9b2aeb215", failure 6
E0223 07:49:23.017769       1 controller.go:957] error syncing claim "c36787bf-9cef-4c50-b199-5cd9b2aeb215": failed to provision volume with StorageClass "nfs-csi": rpc error: code = Internal desc = failed to copy volume for snapshot: exit status 2: tar (child): /tmp/snapshot-29e6025c-b9f0-431a-8b82-76814cf3ccb5/snapshot-29e6025c-b9f0-431a-8b82-76814cf3ccb5/pvc-f2967ba0-5664-4342-bab2-fbd3243e5011.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
I0223 07:49:23.015448       1 event.go:364] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rumburak-novy", Name:"data-my-postgresql-0", UID:"c36787bf-9cef-4c50-b199-5cd9b2aeb215", APIVersion:"v1", ResourceVersion:"6262566", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "nfs-csi": rpc error: code = Internal desc = failed to copy volume for snapshot: exit status 2: tar (child): /tmp/snapshot-29e6025c-b9f0-431a-8b82-76814cf3ccb5/snapshot-29e6025c-b9f0-431a-8b82-76814cf3ccb5/pvc-f2967ba0-5664-4342-bab2-fbd3243e5011.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
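
To see where the "size 0" comes from, the restore size reported by the CSI snapshotter can be checked directly. This is a minimal sketch; the snapshot name is taken from the provisioner log above, and I'm assuming the VolumeSnapshot lives in the restore target namespace:

# restoreSize reported on the VolumeSnapshot K10 created for the restore
kubectl get volumesnapshot k10-csi-snap-xvsqjrc55fx6qmdt -n rumburak-novy -o jsonpath='{.status.restoreSize}{"\n"}'

# restore size and readiness of the cluster-scoped VolumeSnapshotContent objects
kubectl get volumesnapshotcontent -o custom-columns=NAME:.metadata.name,READY:.status.readyToUse,RESTORESIZE:.status.restoreSize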

 


Author · February 23, 2024

I've added allowVolumeExpansion: true to the storage class and ran everything again (with a different namespace), but got a different error.
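
The change was roughly this (a sketch; I'm assuming the storage class is the nfs-csi class seen in the provisioner logs):

kubectl patch storageclass nfs-csi --type merge -p '{"allowVolumeExpansion": true}'

The new error: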

cause:
    cause:
      cause:
        cause:
          cause:
            message: "Specified 1 replicas and only 0 are ready: could not get
              StatefulSet{Namespace: bramborak, Name: my-postgresql}: client
              rate limiter Wait returned an error: context deadline exceeded"
          fields:
            - name: statefulset
              value: my-postgresql
          file: kasten.io/k10/kio/kube/workload/workload.go:47
          function: kasten.io/k10/kio/kube/workload.WaitForWorkloadReady
          linenumber: 47
          message: Statefulset not in ready state
        fields:
          - name: namespace
            value: bramborak
          - name: name
            value: my-postgresql
        file: kasten.io/k10/kio/exec/phases/phase/restore_app.go:773
        function: kasten.io/k10/kio/exec/phases/phase.(*restoreApplicationPhase).waitForWorkload
        linenumber: 773
        message: Error waiting for workload to be ready
      file: kasten.io/k10/kio/exec/phases/phase/restore_app.go:373
      function: kasten.io/k10/kio/exec/phases/phase.(*restoreApplicationPhase).restoreApp
      linenumber: 373
      message: Failed to restore workloads
    file: kasten.io/k10/kio/exec/internal/runner/phase_runner.go:144
    function: kasten.io/k10/kio/exec/internal/runner.(*phaseRunner).execPlannedPhase
    linenumber: 144
    message: Failure in planned phase
  message: Job failed to be executed

 k logs csi-nfs-controller-d96ccb59c-b7cxx -n kube-system

I0223 10:20:09.201267       1 controller.go:1366] provision "bramborak/data-my-postgresql-0" class "nfs-csi": started
I0223 10:20:09.203737       1 event.go:364] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"bramborak", Name:"data-my-postgresql-0", UID:"88c5b80a-4b5e-4196-936e-090169088370", APIVersion:"v1", ResourceVersion:"6280012", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "bramborak/data-my-postgresql-0"
W0223 10:20:09.228956       1 controller.go:934] Retrying syncing claim "88c5b80a-4b5e-4196-936e-090169088370", failure 25
I0223 10:20:09.229011       1 event.go:364] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"bramborak", Name:"data-my-postgresql-0", UID:"88c5b80a-4b5e-4196-936e-090169088370", APIVersion:"v1", ResourceVersion:"6280012", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "nfs-csi": error getting handle for DataSource Type VolumeSnapshot by Name k10-csi-snap-6mlwfckcpfk2hh6k: error getting snapshot k10-csi-snap-6mlwfckcpfk2hh6k from api server: volumesnapshots.snapshot.storage.k8s.io "k10-csi-snap-6mlwfckcpfk2hh6k" not found
E0223 10:20:09.229071       1 controller.go:957] error syncing claim "88c5b80a-4b5e-4196-936e-090169088370": failed to provision volume with StorageClass "nfs-csi": error getting handle for DataSource Type VolumeSnapshot by Name k10-csi-snap-6mlwfckcpfk2hh6k: error getting snapshot k10-csi-snap-6mlwfckcpfk2hh6k from api server: volumesnapshots.snapshot.storage.k8s.io "k10-csi-snap-6mlwfckcpfk2hh6k" not found

 


Author · February 23, 2024

Just want you to have the latest info. I think the problem is still the same: RESTORESIZE is 0.

Does this mean that the restore wants to run but has no snapshot data to restore from?

Could this be caused by removing the namespace?

 

k get volumesnapshotcontent

NAME                                                                          READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER           VOLUMESNAPSHOTCLASS           VOLUMESNAPSHOT                  VOLUMESNAPSHOTNAMESPACE   AGE
k10-csi-snap-x7jfghpwrpq4gdrm-content-42f8b418-4ee1-4b61-a60b-4cdad19d6dff    true         0             Retain           nfs.csi.k8s.io   k10-clone-csi-nfs-snapclass   k10-csi-snap-x7jfghpwrpq4gdrm   rumburak                  122m
k10-csi-snap-jk8mmb22skg8ws4b-content-c1524607-33fd-40a7-b8c5-c153c4ca6280    true         0             Retain           nfs.csi.k8s.io   k10-clone-csi-nfs-snapclass   k10-csi-snap-jk8mmb22skg8ws4b   dyne                      8m20s
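
One of the contents can also be inspected directly to confirm that the driver never reported a restore size, and the snapshot class K10 cloned can be checked as well (names taken from the table above):

# restoreSize as reported in the content's status (empty/0 here)
kubectl get volumesnapshotcontent k10-csi-snap-x7jfghpwrpq4gdrm-content-42f8b418-4ee1-4b61-a60b-4cdad19d6dff -o jsonpath='{.status.restoreSize}{"\n"}'

# driver and parameters of the cloned snapshot class
kubectl get volumesnapshotclass k10-clone-csi-nfs-snapclass -o yaml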

 



Emmanuel · April 9, 2024

Hello @michalek123 ,

 

Would it be possible to gather the events from the namespace of the application being restored?

 

kubectl get ev --sort-by .metadata.creationTimestamp -n <namespace>

 

Please run the above within 30 minutes of the restore failing.
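
If useful, a describe of the PVC that fails to provision may add context alongside the events (PVC name taken from the provisioner logs above; adjust the namespace to the restore target):

kubectl describe pvc data-my-postgresql-0 -n <namespace>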

 

Thanks

Emmanuel

