Failed Import - No Kopia manifests found in the repository
Hey all!
I have a K10 setup and have successfully created and restored from snapshot backups within the same cluster. I have also successfully exported the restore points to the bucket, but when pulling them from the new cluster I get this error about the Kopia manifests. The S3 bucket backend is a Ceph cluster, and both clusters are connected to the same buckets. The decryption key lets me proceed with the import policy and run it, so it seems like it can see the data. I have verified that the data is being pushed to the bucket, and I have also verified that the backup works on the same cluster, since I was able to restore from the exported restore point with success. Anything specific to look for? I have looked through the logging pod and don't see anything obvious.
Both clusters are set up exactly the same, and the error persists for backups from both clusters.
Hi @cheslz ,
The error "No Kopia manifests found in the repository" means that K10 couldn't find the data at the export location profile.
Do you see any partial errors in the Export Job on the source cluster?
Next steps:
Can you restore to a new namespace from the export on the source cluster and post whether it's successful?
Regards Satish
Hi Satish,
I don’t see any partial errors in the Export Job, I will attach that here.
During the creation of the import policy, I am able to paste the config, and the right bucket gets picked and authenticated, so it does see the data inside the bucket.
The only difference between the two clusters, besides location and network, is the K10 namespace: the primary cluster uses kasten-io and the secondary uses k10. I don't see this being the problem, but figured I would mention it before there is a question about it.
I was able to confirm that Azure storage also failed, even though the same file structure was exported into the storage. It seems like the problem is in the upload, even though there are no errors about it.
@cheslz Would you be able to confirm where you copied the export string (the config data for the import) from?
Is it from the policy?
I can see from the outputs above that the export you did was a manual one.
Manual exports are stored in a different location, so if you were using the export string from the policy, it probably won't find the manifests in the correct directory.
Would you be able to validate this?
The root cause is that the primary cluster's namespace is kasten-io while the secondary's is k10, which leads to a different Kopia server user set in the secondary cluster (e.g. k10-admin@migration.k10.export-1655411744558878198), because the Kopia server user set depends on the K10 namespace name.
The current workaround is to change the namespace in the target cluster to kasten-io (the same as the primary cluster).
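As a quick sketch of that check (the namespace values here are illustrative, taken from this thread; the reinstall commands in the comments are the standard Helm steps and may need adjusting for your setup):

```shell
#!/bin/sh
# Compare the K10 namespace name recorded from each cluster. In practice you
# would fill these in from `kubectl get ns` (or `helm list -A`) on each side.
PRIMARY_NS="kasten-io"   # namespace of the K10 install on the source cluster
SECONDARY_NS="k10"       # namespace of the K10 install on the target cluster

if [ "$PRIMARY_NS" = "$SECONDARY_NS" ]; then
  echo "namespaces match: imports should find the Kopia manifests"
else
  echo "namespace mismatch ($PRIMARY_NS vs $SECONDARY_NS): the Kopia server"
  echo "user set will differ, so the import cannot find the manifests."
  # Hypothetical remediation on the target cluster: uninstall K10 and
  # reinstall it into the same namespace as the source cluster, e.g.:
  #   helm uninstall k10 --namespace "$SECONDARY_NS"
  #   helm install k10 kasten/k10 --namespace "$PRIMARY_NS" --create-namespace
fi
```

The reinstall is disruptive, so if matching the namespace at install time is an option for new target clusters, that avoids the problem entirely.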
Ahmed Hagag
This is great.
Do they need to be kasten-io, or do they just need to be the same namespace? Thanks,
Brandon
@cheslz They just need to have the same namespace name.