When I execute the policy through the Kasten UI, I get the following error:
```
cause:cause:cause:fields:
  - name: message
    value: "could not render object reference {mongosecret}: template: config:1:14:
      executing \"config\" at <.Deployment.Name>: can't evaluate
      field metadata in type *param.DeploymentParams"
```
Our MongoDB is deployed as a Deployment, and we have annotated the Deployment's manifest with this annotation:
`kanister.kasten.io/blueprint: mongodb-blueprint`
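For reference, the annotation sits in the Deployment's metadata; a minimal sketch (the name and namespace below are placeholders, and the spec is trimmed):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-logical        # placeholder for the actual Deployment name
  namespace: mongodb           # placeholder namespace
  annotations:
    kanister.kasten.io/blueprint: mongodb-blueprint
spec:
  # ... rest of the Deployment spec unchanged
```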
Why can't the Kanister operator evaluate the Go template correctly at runtime?
The only thing I can see is that it can't get at the MongoDB secret, so it would not be able to run the freeze/dump scripts. That is just a quick guess though. Let's see what Kasten support says.
I've hardcoded the MongoDB secret in the blueprint manifest, and when I ran the policy again I got a nil pointer error while resolving the Go template where we define the host.
I see that you are using a modified blueprint based on the logical MongoDB example in the K10 docs.
From the error, I assume that you are using `{{ .Deployment.metadata.Name }}` instead of `{{ .StatefulSet.Name }}` in the blueprint.
But the DeploymentParams struct doesn't have a field called metadata.
It should be `{{ .Deployment.Name }}` and `{{ .Deployment.Namespace }}`.
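For instance, the mongosecret object reference in a Deployment-based blueprint would look roughly like this; a sketch adapted from the docs' StatefulSet example, assuming the secret shares the Deployment's name:

```yaml
objects:
  mongosecret:
    kind: Secret
    name: '{{ .Deployment.Name }}'            # not .Deployment.metadata.Name
    namespace: '{{ .Deployment.Namespace }}'
```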
Since you are customising the blueprint for your environment, always validate that the rendered value matches the actual value.
For example, `actions.backup.phases[0].objects.mongosecret.name` is rendered from `{{ .StatefulSet.Name }}` (or `{{ .Deployment.Name }}` in your case).
Validate that the secret name is the same as your deployment name.
Similarly, in this line we get the password from the secret using the key `mongodb-root-password`.
If your cluster uses different names (perhaps because of a different MongoDB installation), you might have to modify these values in your blueprint to match your installation.
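As a sketch, the password lookup in a docs-style phase reads the key from the referenced secret; the phase name `takeConsistentBackup` and the key `mongodb-root-password` follow the docs example and may differ in your installation:

```yaml
command:
- bash
- -o
- errexit
- -c
- |
  # "mongodb-root-password" must exist as a key in the secret referenced as mongosecret
  dbPassword='{{ index .Phases.takeConsistentBackup.Secrets.mongosecret.Data "mongodb-root-password" | toString }}'
```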
It seems that the Kanister operator doesn't resolve the Go template correctly.
I see that the hostname you are using in the mongodump command might not be available at all, since the headless service and the `<name>-0` pod name only exist for StatefulSets, not for Deployments.
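To illustrate, a hedged sketch of the two host forms inside the phase's shell command; the Service name `mongodb-logical` is an assumption:

```yaml
command:
- bash
- -c
- |
  # StatefulSet form from the docs example - only valid for StatefulSets:
  #   host='{{ .StatefulSet.Name }}-0.{{ .StatefulSet.Name }}-headless.{{ .StatefulSet.Namespace }}.svc.cluster.local'
  # For a Deployment, point mongodump at a regular Service instead:
  host='mongodb-logical.{{ .Deployment.Namespace }}.svc.cluster.local'
  # dbPassword is resolved from the secret as shown above; the archive is then
  # piped to kando, as in the docs example
  mongodump --oplog --gzip --archive --host "${host}" -u root -p "${dbPassword}"
```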
@claudionw94 If you can share the blueprint that you are using (redact any sensitive values if you are hard-coding them in your blueprint), I will take a look at it.
Also mention a few details about your MongoDB installation (for example the installation source, and whether you are using a particular Helm chart).
Hi Jaiganesh, many thanks for replying.
I pasted the wrong snippet of code: at the beginning I mistakenly wrote `{{ .Deployment.metadata.Name }}` and `{{ .Deployment.metadata.Namespace }}` in the blueprint resource, but I have already tried `{{ .Deployment.Name }}` and `{{ .Deployment.Namespace }}` and I get the following error:
I am not sure why `{{ .Deployment.Name }}` evaluates to a nil pointer. I will see if I can find anything tomorrow.
K10 creates a resource called an ActionSet during the backup. If possible, could you capture the YAML output of that resource? (It is cleaned up as soon as the backup fails, so you will have to watch it to get the output.)
```
kubectl get actionset -o yaml -w
```
Would you be able to try the following:
`{{ index .Object.metadata.name }}` in place of `{{ .Deployment.Name }}`, and
`{{ .Namespace.Name }}` in place of `{{ .Deployment.Namespace }}`?
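A sketch of the suggested substitution in the objects reference (this assumes the secret name matches whatever object carries the blueprint annotation):

```yaml
objects:
  mongosecret:
    kind: Secret
    name: '{{ index .Object.metadata.name }}'   # name of the annotated object
    namespace: '{{ .Namespace.Name }}'          # resolves when the referenced object is the Namespace
```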
So, right. With the updated blueprint, the templates are rendered correctly, but the secret named `mongodb-logical` is not available in the namespace.
Here the name under mongosecret denotes the name of the secret, and it is derived from the deployment's name (`{{ index .Object.metadata.name }}`).
I assume your deployment name is `mongodb-logical` but your secret name is something else.
Is that the name of your deployment? And you mentioned you have annotated the deployment with `kanister.kasten.io/blueprint: mongodb-blueprint`, right?
`{{ index .Object.metadata.name }}` should render the name of the object that you have annotated with the blueprint name.
You will have to construct the blueprint in such a way that the rendered value gives the name of the secret.
Kanister works at the object level. The well-known objects in Kanister as of today are Deployment, StatefulSet, PersistentVolumeClaim, Namespace, and OpenShift's DeploymentConfig.
K10 creates an ActionSet referencing the blueprint and passes the object as input in that ActionSet.
This object is nothing but the resource that you have annotated with the blueprint name (kanister.kasten.io/blueprint=mongodb-blueprint).
So if you have annotated the Deployment in your case, `{{ index .Object.metadata.name }}` will be the same as `{{ .Deployment.Name }}`.
But it seems there is a conflict, and the object being used here is the Namespace.
If the referenced object is a Namespace and the blueprint uses `{{ .Deployment.Name }}`, it will evaluate to a nil value.
You should be able to see which object is used in your ActionSet if you look carefully at the entire error message; it will mention a field called object.
Since you mentioned that `{{ index .Object.metadata.name }}` renders to the Namespace name, my guess is that you have annotated the Namespace with the blueprint name (kanister.kasten.io/blueprint=mongodb-blueprint) instead of the Deployment, or annotated both the Namespace and the Deployment.
There is another thing that could cause the issue: if you have selected Pre and Post-Snapshot Action Hooks under the Advanced settings in the K10 UI policy form.
By default, the Namespace is used as the object when the hooks are run from the policy API.
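Roughly, the ActionSet that K10 generates for a hook then carries the Namespace as its object; a trimmed, hypothetical sketch (the action name, metadata and namespace name are placeholders):

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  generateName: k10-hook-        # placeholder; K10 generates the actual name
  namespace: kasten-io
spec:
  actions:
  - name: backupPrehook          # hook action name configured in the policy
    blueprint: mongodb-blueprint
    object:
      kind: Namespace            # hooks run from the policy target the Namespace
      name: mongodb              # placeholder: the application's namespace
```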
I have actually set the blueprint as a pre-snapshot hook under the Advanced settings in the K10 UI policy form, and I now understand that in this case the referenced object is the Namespace and not the Deployment (which is why `{{ index .Object.metadata.name }}` renders the Namespace name rather than the Deployment name).
How can I target the Deployment through the Advanced settings in the K10 UI policy form and use the `{{ .Deployment.Namespace }}` and `{{ .Deployment.Name }}` templates?
OK, I understand. Is there perhaps a way to execute the blueprint within a Kasten policy? I am trying to hardcode the Deployment name and namespace in the blueprint manifest so that the Kanister operator does not have to resolve them, but in this case the Kanister job pod is created in the correct namespace and then fails; if I read the pod logs (before it dies) I get the following error:
Reading this, I think the blueprint is rendered correctly but there is a problem with `--profile '{{ toJson .Profile }}'` in the kando command. Does the profile have to be created with `kanctl` rather than via location profile creation in the Kasten UI?
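For reference, a docs-style kando invocation that uses the profile looks roughly like this (the path, output name and surrounding variables follow the example, not this specific cluster):

```yaml
command:
- bash
- -c
- |
  # host and dbPassword are resolved earlier in the script
  mongodump --oplog --gzip --archive --host "${host}" -u root -p "${dbPassword}" |
    kando location push --profile '{{ toJson .Profile }}' --path rs_backup.gz --output-name s3path -
```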
@claudionw94 As I mentioned before, hooks are usually meant to freeze and unfreeze databases, or to run a few commands before/after the backup and after the restore.
They are not meant for copying data or doing data operations against a location profile.
Execution hooks do not require location profiles, and hook blueprint actions cannot use template parameters and helpers such as `{{ .Profile.Location.Bucket }}` or `kando location`.
If you are looking for application-consistent backups, please have a look at this example.
By the way, you don't have to create profiles manually using `kanctl`; K10 generates one automatically when it runs a backup with a Kanister action.
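As a structural sketch only (phases heavily trimmed, names assumed): a hook-style action next to a real backup action. Only the backup action receives a location profile from K10, so only there can the blueprint use kando or `{{ .Profile }}` templates:

```yaml
actions:
  backupPrehook:                 # hook: freeze/unfreeze only, no profile available
    phases:
    - func: KubeTask
      name: lockMongo
  backup:                        # real backup action: K10 injects the profile here
    outputArtifacts:
      mongoBackup:
        keyValue:
          s3path: '{{ .Phases.takeConsistentBackup.Output.s3path }}'
    phases:
    - func: KubeTask
      name: takeConsistentBackup
```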
I've created a new policy in Kasten without the pre-hook and post-hook (for the MongoDB blueprint); instead I have created the blueprint and annotated the deployment with it. I have run the policy, and this time `{{ toJson .Profile }}` is rendered correctly, but I'm facing an error with Kopia:
```
cause:cause:cause:cause:cause:
  message: Cannot get tenantID from config
  fields:
    - name: storageType
      value: AD
  file: kasten.io/k10/kio/storage/azuredisk.go:37
  function: kasten.io/k10/kio/storage.newAzureDisk
  linenumber: 37
message: Failed to initialize storage provider
file: kasten.io/k10/kio/exec/phases/phase/data_manager.go:81
function: kasten.io/k10/kio/exec/phases/phase.(*NativeDataManager).DataManagerSetup
linenumber: 81
message: Could not get storage provider. Validate that Storage provider
  credentials are configured correctly
```
I have executed the `kopia repository connect s3` CLI command, specifying the correct endpoint, AWS access key, AWS secret key, etc., and Kopia reports the following message:
`Enter password to open repository:`
But I've never created a Kopia repository and never specified any password.
@claudionw94, the problem here seems to be that you are still running a blueprint which only has pre-backup and post-backup hooks.
That means the blueprint doesn't contain a backup action, so for the backup K10 tries to take a snapshot of the PVC that your MongoDB uses by looking at its provisioner.
By the looks of it, you seem to be using the Azure disk provisioner for the PVCs.
Regarding the Kopia password prompt: K10 creates a Kopia repository with a generated key (stored in the catalog) to hold all the data at the S3 endpoint.
You will not be able to access the data inside the Kopia repository without that key.
You will only be able to restore the data with K10.
I went to check whether the mongodump was actually taken: I can't see the dumps in the action details of my backup policy, and it seems like only the snapshots are being taken, not the mongodumps.