Solved

Failed Import - No Kopia manifests found in the repository


Userlevel 2

Hey all!

I have a K10 setup and have successfully created and restored from snapshot backups within the same cluster. I have also successfully exported the restore points to the bucket, but when pulling them from the new cluster I get this error about the Kopia manifests. The S3 bucket backend is a Ceph cluster, and both clusters are connected to the same buckets. Pasting the decryption key lets me create the import policy and run it, so it seems like K10 can see the export. I have verified that the data is being pushed to the bucket, and I have verified the export works on the source cluster, since I was able to restore from the exported restore point there successfully. Anything specific to look for? I have looked through the logging pod and don't see anything obvious.
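In case it is useful, this is roughly how I have been comparing the location profile on the two clusters to confirm they point at the same bucket, endpoint, and prefix (a sketch; the profile name, namespace, and kubectl context are placeholders for my environment):

# Dump the location profile on each cluster and diff the object-store settings
# (bucket name, endpoint, region, prefix). Adjust the namespace to wherever
# K10 is installed on each cluster.
kubectl get profiles.config.kasten.io <profile-name> -n <k10-namespace> -o yaml > primary-profile.yaml
kubectl --context <secondary-context> get profiles.config.kasten.io <profile-name> -n <k10-namespace> -o yaml > secondary-profile.yaml
diff primary-profile.yaml secondary-profile.yaml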

 


Best answer by Hagag 21 June 2022, 11:26


8 comments

Userlevel 2

Both clusters are set up exactly the same, and the error persists for backups from both clusters.

Userlevel 2
Badge +1

Hi @cheslz,

 

The error "No kopia manifests found in the repository" means that K10 could not find the exported data in the export location profile.

 

Do you see any partial errors in the Export job on the source cluster?

 

Next steps:

Can you restore to a new namespace from the export on the source cluster and post whether it is successful?
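For example, something along these lines should list the restore points K10 has for the application on the source cluster (the application namespace and name are placeholders; adjust for your install):

# Namespaced restore points for the application, and the cluster-scoped
# RestorePointContent objects that back exported restore points.
kubectl get restorepoints.apps.kio.kasten.io -n <app-namespace>
kubectl get restorepointcontents.apps.kio.kasten.io | grep <app-name>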

 

Regards
Satish

Userlevel 2

Hi Satish,

I don't see any partial errors in the Export job; I will attach it here.

kind: ExportAction
apiVersion: actions.kio.kasten.io/v1alpha1
metadata:
  name: manualbackup-8qtls-v29lr
  namespace: bitwarden
  uid:
  resourceVersion: ""
  creationTimestamp: 2022-05-25T17:22:12Z
  labels:
    k10.kasten.io/appName: bitwarden
    k10.kasten.io/appNamespace: bitwarden
status:
  state: Complete
  startTime: 2022-05-25T17:22:12Z
  endTime: 2022-05-25T17:25:14Z
  restorePoint:
    name: ""
  result:
    name: ""
spec:
  subject:
    apiVersion: apps.kio.kasten.io/v1alpha1
    kind: RestorePoint
    name: manualbackup-8qtls
    namespace: bitwarden
  receiveString: XXX
  profile:
    name: kasten-migration
    namespace: kasten-io
  migrationToken:
    name: export-XXX-migration-token-XXX
    namespace: kasten-io
  exportData:
    enabled: true

 

This is the restore job from the exported restore point on the same cluster the export was performed on.

kind: RestoreAction
apiVersion: actions.kio.kasten.io/v1alpha1
metadata:
  name: bitwarden-test-wl5rj
  namespace: bitwarden-test
  uid: xxx
  resourceVersion: "xxx"
  creationTimestamp: 2022-05-26T16:38:57Z
  labels:
    k10.kasten.io/appName: bitwarden
    k10.kasten.io/appNamespace: bitwarden
status:
  state: Complete
  startTime: 2022-05-26T16:38:57Z
  endTime: 2022-05-26T16:42:33Z
  restorePoint:
    name: ""
  result:
    name: ""
spec:
  subject:
    apiVersion: apps.kio.kasten.io/v1alpha1
    kind: RestorePoint
    name: manualbackup-8qtls7w5vx
    namespace: bitwarden
  targetNamespace: bitwarden-test
  filters:
    includeResources:
      - name: bitwarden
        version: v1
        resource: namespaces
      - name: bitwarden.xxx-tls
        version: v1
        resource: secrets
      - name: default-token-xxx
        version: v1
        resource: secrets
      - name: sh.helm.release.v1.bitwarden.v1
        version: v1
        resource: secrets
      - name: kube-root-ca.crt
        version: v1
        resource: configmaps
      - name: bitwarden-bitwarden-k8s
        version: v1
        resource: services
      - name: default
        version: v1
        resource: serviceaccounts
      - name: rook-ceph-block
        group: storage.k8s.io
        version: v1
        resource: storageclasses
      - name: bitwarden-bitwarden-k8s
        group: apps
        version: v1
        resource: deployments
      - name: bitwarden-bitwarden-k8s
        version: v1
        resource: persistentvolumeclaims

 

 

During the creation of the import policy, I am able to paste the config; the right bucket gets picked and it appears to authenticate, so it can see the data inside the bucket.

This is the log from the failed import job.

kind: ImportAction
apiVersion: actions.kio.kasten.io/v1alpha1
metadata:
  name: scheduled-8zwlt
  namespace: k10
  uid:
  resourceVersion: ""
  creationTimestamp: 2022-05-27T20:31:24Z
  labels:
    k10.kasten.io/policyName: bitwarden-import
    k10.kasten.io/policyNamespace: k10
    k10.kasten.io/runActionName: policy-run-dj8dh
status:
  state: Failed
  startTime: 2022-05-27T20:31:24Z
  endTime: 2022-05-27T20:32:42Z
  restorePoint:
    name: ""
  result:
    name: ""
  error:
    cause: '{"cause":{"file":"kasten.io/k10/kio/collections/kopia/operations.go:144","function":"kasten.io/k10/kio/collections/kopia.GetLatestSnapshot","linenumber":144,"message":"No kopia manifests found in the repository"},"file":"kasten.io/k10/kio/exec/phases/phase/migrate.go:359","function":"kasten.io/k10/kio/exec/phases/phase.(*migrateReceivePhase).Run","linenumber":359,"message":"Failed to import latest collection"}'
    message: Job failed to be executed
spec:
  subject:
    name: ""
  scheduledTime: 2022-05-27T20:31:24Z
  receiveString: XXX
  profile:
    name: kasten-migration
    namespace: k10
 

The only difference between the two clusters, besides location and network, is the K10 namespace: on the primary cluster it is kasten-io and on the secondary it is k10. I don't see this being the problem, but figured I would mention it before anyone asks.

Userlevel 2

I was able to confirm that an Azure storage location also fails, even though the same file structure was written to that storage. It seems like the problem is in the upload, even though there are no errors about it.
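For reference, this is roughly how I am checking what the export actually writes on the S3 side (the bucket name and Ceph RGW endpoint are placeholders for my environment):

# List the objects the export wrote to the S3-compatible bucket (Ceph RGW).
aws s3 ls s3://<k10-export-bucket>/ --recursive \
  --endpoint-url https://<ceph-rgw-endpoint> | head -n 50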

Userlevel 6
Badge +2

@cheslz Would you be able to confirm where you copied the export string (the config data for import) from?

Is it from the policy?

From the outputs above, I can see that the export you performed was a manual one.

Manual exports are stored in a different location, so if you used the export string from the policy, the import would probably not find the manifests in the correct directory.

Would you be able to validate this?

Userlevel 5
Badge +2

The root cause is that the primary cluster's K10 namespace is kasten-io while the secondary's is k10, which leads to a different Kopia server user in the secondary cluster (k10-admin@migration.k10.export-1655411744558878198), because the Kopia server user depends on the K10 namespace name.

The current workaround is to change the K10 namespace in the target cluster to kasten-io (the same as the primary cluster).
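For example, a reinstall of K10 on the target cluster into the kasten-io namespace would look roughly like this (a sketch, assuming the Helm release is named k10; carry over any values from the existing install before uninstalling):

# Remove the install from the k10 namespace, then reinstall into kasten-io.
helm uninstall k10 --namespace k10
helm repo add kasten https://charts.kasten.io/
helm repo update
helm install k10 kasten/k10 --namespace=kasten-io --create-namespace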
 

Ahmed Hagag

Userlevel 2

This is great. 

Do they need to be kasten-io, or do they just need to be the same namespace?
Thanks,

Brandon

Userlevel 5
Badge +2

@cheslz They just need to have the same namespace name.

Thanks

Ahmed Hagag
