If I restore a snapshot into a new namespace, it works. But if I delete that namespace and then restore it from K10/Applications/Removed, I always get this error:
cause:
  cause:
    cause:
      cause:
        cause:
          message: "Specified 1 replicas and only 0 are ready: could not get
            StatefulSet{Namespace: rumburak-novy, Name: my-postgresql}: client
            rate limiter Wait returned an error: rate: Wait(n=1) would exceed
            context deadline"
        fields:
          - name: statefulset
            value: my-postgresql
        file: kasten.io/k10/kio/kube/workload/workload.go:47
        function: kasten.io/k10/kio/kube/workload.WaitForWorkloadReady
        linenumber: 47
        message: Statefulset not in ready state
      fields:
        - name: namespace
          value: rumburak-novy
        - name: name
          value: my-postgresql
      file: kasten.io/k10/kio/exec/phases/phase/restore_app.go:773
      function: kasten.io/k10/kio/exec/phases/phase.(*restoreApplicationPhase).waitForWorkload
      linenumber: 773
      message: Error waiting for workload to be ready
    file: kasten.io/k10/kio/exec/phases/phase/restore_app.go:373
    function: kasten.io/k10/kio/exec/phases/phase.(*restoreApplicationPhase).restoreApp
    linenumber: 373
    message: Failed to restore workloads
  file: kasten.io/k10/kio/exec/internal/runner/phase_runner.go:144
  function: kasten.io/k10/kio/exec/internal/runner.(*phaseRunner).execPlannedPhase
  linenumber: 144
  message: Failure in planned phase
message: Job failed to be executed
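Reading the chain from the innermost cause outward: K10 restored the manifests, then timed out waiting for the my-postgresql StatefulSet to report 1 ready replica; the rate-limiter text just says the final status check ran past the deadline. Whether the pod is stuck on an unbound volume can be checked with something like this (a sketch, using the namespace and PVC name that appear in the error and logs):

k get pods,pvc -n rumburak-novy
k describe pvc data-my-postgresql-0 -n rumburak-novy

If the PVC sits in Pending, its events should match the provisioner log further down.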
What does this mean? And why does the restore only fail for the removed application?

The csi-nfs-controller logs contain this warning:

requested volume size 8589934592 is greater than the size 0 for the source snapshot k10-csi-snap-xvsqjrc55fx6qmdt. Volume plugin needs to handle volume expansion.
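If I read this right, the provisioner sees a source VolumeSnapshot whose status carries no restore size at all. What the snapshotter actually recorded can be inspected directly (a sketch; this assumes the snapshot object still exists in the target namespace):

k get volumesnapshot k10-csi-snap-xvsqjrc55fx6qmdt -n rumburak-novy -o jsonpath='{.status.readyToUse} {.status.restoreSize}'
k get volumesnapshotcontent | grep k10-csi-snap-xvsqjrc55fx6qmdt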
k logs csi-nfs-controller-d96ccb59c-b7cxx -n kube-system
I0223 07:49:22.750254 1 controller.go:1366] provision "rumburak-novy/data-my-postgresql-0" class "nfs-csi": started
I0223 07:49:22.759026 1 event.go:364] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rumburak-novy", Name:"data-my-postgresql-0", UID:"c36787bf-9cef-4c50-b199-5cd9b2aeb215", APIVersion:"v1", ResourceVersion:"6262566", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rumburak-novy/data-my-postgresql-0"
W0223 07:49:22.775566 1 controller.go:1202] requested volume size 8589934592 is greater than the size 0 for the source snapshot k10-csi-snap-xvsqjrc55fx6qmdt. Volume plugin needs to handle volume expansion.
I0223 07:49:23.015094 1 controller.go:1075] Final error received, removing PVC c36787bf-9cef-4c50-b199-5cd9b2aeb215 from claims in progress
W0223 07:49:23.017682 1 controller.go:934] Retrying syncing claim "c36787bf-9cef-4c50-b199-5cd9b2aeb215", failure 6
E0223 07:49:23.017769 1 controller.go:957] error syncing claim "c36787bf-9cef-4c50-b199-5cd9b2aeb215": failed to provision volume with StorageClass "nfs-csi": rpc error: code = Internal desc = failed to copy volume for snapshot: exit status 2: tar (child): /tmp/snapshot-29e6025c-b9f0-431a-8b82-76814cf3ccb5/snapshot-29e6025c-b9f0-431a-8b82-76814cf3ccb5/pvc-f2967ba0-5664-4342-bab2-fbd3243e5011.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
I0223 07:49:23.015448 1 event.go:364] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rumburak-novy", Name:"data-my-postgresql-0", UID:"c36787bf-9cef-4c50-b199-5cd9b2aeb215", APIVersion:"v1", ResourceVersion:"6262566", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "nfs-csi": rpc error: code = Internal desc = failed to copy volume for snapshot: exit status 2: tar (child): /tmp/snapshot-29e6025c-b9f0-431a-8b82-76814cf3ccb5/snapshot-29e6025c-b9f0-431a-8b82-76814cf3ccb5/pvc-f2967ba0-5664-4342-bab2-fbd3243e5011.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
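The tar errors look like the actual root cause: csi-driver-nfs restores a snapshot by unpacking a .tar.gz archive stored on the NFS share, and pvc-f2967ba0-5664-4342-bab2-fbd3243e5011.tar.gz is no longer there. That can be verified on the share itself (a sketch; /mnt/nfs is a placeholder for wherever the export is mounted):

ls -l /mnt/nfs/snapshot-29e6025c-b9f0-431a-8b82-76814cf3ccb5/

If the directory or archive is missing, my guess is that deleting the original namespace deleted the VolumeSnapshot, and a Delete deletionPolicy on the VolumeSnapshotClass then removed the backing archive, so the restore point under K10/Applications/Removed points at data that no longer exists.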