Hello!
I have a big problem. We had a short network outage in our datacenter, and afterwards I needed to restore an application. I was shocked to find that my backups seem to be "lost". After some troubleshooting and restarting the K10 services, I tried a restore: the restore itself works, but the export to NFS does not. Maybe that is why K10 tells me there are no restore points for my application. This is the error from the failed export:
cause:
  cause:
    fields:
      - name: FailedSubPhases
        value:
          - Err:
              cause:
                cause:
                  cause:
                    cause:
                      cause:
                        message: command terminated with exit code 1
                      message: invalid repository password
                    file: kasten.io/k10/kio/kopia/repository.go:549
                    function: kasten.io/k10/kio/kopia.ConnectToKopiaRepository
                    linenumber: 549
                    message: Failed to connect to the backup repository
                  fields:
                    - name: appNamespace
                      value: <APPNAME>-test
                  file: kasten.io/k10/kio/exec/phases/phase/export.go:216
                  function: kasten.io/k10/kio/exec/phases/phase.prepareKopiaRepoIfExportingData
                  linenumber: 216
                  message: Failed to create Kopia repository for data export
                file: kasten.io/k10/kio/exec/phases/phase/export.go:138
                function: kasten.io/k10/kio/exec/phases/phase.(*exportRestorePointPhase).Run
                linenumber: 138
                message: Failed to copy artifacts
              fields: []
              message: Job failed to be executed
            ID: d5e372eb-db46-11ec-ba36-aae5192d65c4
            Phase: Exporting RestorePoint
    file: kasten.io/k10/kio/exec/phases/phase/queue_and_wait_children.go:185
    function: kasten.io/k10/kio/exec/phases/phase.(*queueAndWaitChildrenPhase).processGroup
    linenumber: 185
    message: Failure in exporting restorepoint
  fields:
    - name: manifestID
      value: a7222bc2-db46-11ec-ba36-aae5192d65c4
    - name: jobID
      value: a7297043-db46-11ec-a99e-e65a53c09c98
    - name: groupIndex
      value: 0
  file: kasten.io/k10/kio/exec/phases/phase/queue_and_wait_children.go:87
  function: kasten.io/k10/kio/exec/phases/phase.(*queueAndWaitChildrenPhase).Run
  linenumber: 87
  message: Failed checking jobs in group
message: Job failed to be executed
fields: []
- Why don't I see my restore points anymore? There should be at least one restore point remaining as a VolumeSnapshot in the cluster (I can see the snapshots with kubectl get volumesnapshot --all-namespaces).
- Why did the password for my repository on NFS change? Can I recover it?
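In case it helps, this is what I am running to check what is still there. The NFS path and repository sub-path are placeholders for my setup, and I am assuming the K10 restore point CRDs live in the apps.kio.kasten.io API group:

```shell
# List the CSI VolumeSnapshots that still exist in the cluster
kubectl get volumesnapshot --all-namespaces

# List the restore points K10 itself still knows about
# (restorepointcontents are the cluster-scoped K10 objects)
kubectl get restorepointcontents.apps.kio.kasten.io

# Try to open the Kopia repository on the NFS share directly,
# to see whether the password is really rejected.
# /mnt/nfs/k10/<repo-path> is a placeholder for the mounted
# location-profile directory.
kopia repository connect filesystem \
  --path /mnt/nfs/k10/<repo-path> \
  --password <passphrase>
```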
Thanks a lot; let me know if you need more information.
Info:
- Kubernetes: 1.21
- K10: 4.5.14