Solved

Go template in blueprint not resolved at runtime


Userlevel 3

Hello,

 

I’m trying to set up an application-level backup for MongoDB using the Kasten/Kanister integration in a k8s cluster.

I’ve created the Kanister Profile using the Kasten interface for an S3-compatible target (MinIO),

and I’m using this YAML as a base blueprint for the backup action:

 raw.githubusercontent.com/kanisterio/kanister/0.72.0/examples/stable/mongodb/blueprint-v2/mongo-blueprint.yaml

 

When I execute the policy through the Kasten UI, I get the following error:

 

cause:
  cause:
    cause:
      fields:
        - name: message
          value: "could not render object reference {mongosecret}: template: config:1:14:
            executing \"config\" at <.Deployment.Name>: can't evaluate
            field metadata in type *param.DeploymentParams"

Our MongoDB is deployed as a Deployment, and we have annotated the Deployment manifest with this annotation:

 

kanister.kasten.io/blueprint: mongodb-blueprint
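
For completeness, this is roughly how the annotation was applied (a sketch; the deployment and namespace names are placeholders for our actual ones):

  kubectl -n <mongodb-namespace> annotate deployment <mongodb-deployment> \
      kanister.kasten.io/blueprint=mongodb-blueprint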

Why can’t the Kanister operator evaluate the Go template correctly at runtime?


Best answer by jaiganeshjk 16 February 2022, 14:25


18 comments

Userlevel 7
Badge +22

The only thing I can see is that it can’t get at the MongoDB secret, so it would not be able to run the freeze/dump scripts. That is just a quick guess, though. Let’s see what Kasten support says.

Userlevel 3

The only thing I can see is that it can’t get at the MongoDB secret, so it would not be able to run the freeze/dump scripts. That is just a quick guess, though. Let’s see what Kasten support says.

 

I hardcoded the MongoDB secret in the blueprint manifest, and when I ran the policy again I got a nil pointer error while resolving the Go template where we define the host:

  host='{{ .Deployment.Name }}-0.{{ .Deployment.Name }}-headless.{{ .Deployment.Namespace }}.svc.cluster.local'

It seems the Kanister operator doesn’t resolve the Go template correctly.

Userlevel 6
Badge +2

I see that you are using a modified blueprint, referring to the logical MongoDB examples in the K10 docs.

From the error, I assume that you are using ‘{{ .Deployment.metadata.Name }}’ instead of '{{ .StatefulSet.Name }}' in the blueprint.

But the DeploymentParams struct doesn’t have a field called metadata.

It should be ‘{{ .Deployment.Name }}’ and ‘{{ .Deployment.Namespace }}’.
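
For a Deployment, the objects section would look roughly like this (a sketch based on the example blueprint, with the StatefulSet references swapped for Deployment ones):

        mongosecret:
          kind: Secret
          name: '{{ .Deployment.Name }}'
          namespace: '{{ .Deployment.Namespace }}'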

 

Since you are customising the blueprint for your environment, always validate that the actual value and the rendered value match.

For example, the value at `actions.backup.phases[0].objects.mongosecret.name` is rendered from '{{ .StatefulSet.Name }}', or

‘{{ .Deployment.Name }}’ in your case.

Validate the secret name and check that it is the same as your deployment name.

 

Similarly, in this line we get the password from the secret using the key `mongodb-root-password`.

If your cluster uses different names (perhaps because of a different MongoDB installation), you might have to modify these values in your blueprint according to your installation.
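
For instance, if your installation stored the root password under a different key, only the key in the lookup would change (the key name below is hypothetical):

  dbPassword='{{ index .Phases.takeConsistentBackup.Secrets.mongosecret.Data "mongodb-password" | toString }}'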

Userlevel 6
Badge +2

I got a nil pointer error while resolving the Go template where we define the host:

  host='{{ .Deployment.Name }}-0.{{ .Deployment.Name }}-headless.{{ .Deployment.Namespace }}.svc.cluster.local'

It seems the Kanister operator doesn’t resolve the Go template correctly.

 

The hostname you are using in the mongodump command is likely not resolvable at all, because the headless service and the pod name with the -0 index will not exist for a Deployment. Those are only available for StatefulSets.
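
For a Deployment-backed installation, something along these lines would be more appropriate (a sketch, assuming the regular Service has the same name as the Deployment):

  host='{{ .Deployment.Name }}.{{ .Deployment.Namespace }}.svc.cluster.local'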

Userlevel 6
Badge +2

@claudionw94 If you can share the blueprint that you are using (redact any sensitive values if you are hard-coding them in your blueprint), I will take a look at it.

Also, mention a few details about your MongoDB installation (such as the installation source, e.g. whether you are using a particular Helm chart).

Userlevel 3

If you can share the blueprint that you are using (redact any sensitive values if you are hard-coding them in your blueprint), I will take a look at it.

Also, mention a few details about your MongoDB installation (such as the installation source, e.g. whether you are using a particular Helm chart).

Hi Jaiganesh, many thanks for replying.

I pasted the wrong snippet of code: at the beginning I mistakenly wrote
{{ .Deployment.metadata.Name }} and {{ .Deployment.metadata.Namespace }}
in the blueprint resource, but I have already tried {{ .Deployment.Name }} and {{ .Deployment.Namespace }}
and I get the following error:

cause:
  cause:
    cause:
      fields:
        - name: message
          value: 'could not render object reference {mongosecret}: template: config:1:14:
            executing "config" at <.Deployment.Name>: nil pointer evaluating
            *param.DeploymentParams.Name'

In the Template Parameters documentation (https://docs.kanister.io/templates.html?highlight=deploymentparams#deployment) I read this line:


" For example, to access the Name of a Deployment use: "{{ index .Deployment.Name }}" "


but if I put index inside the {{ }} I get the same error as before.


For reference, here is the Blueprint YAML manifest I am using:

 

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mongodb-blueprint
actions:
  backup:
    outputArtifacts:
      mongoBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using `--output-name` flag
        kopiaSnapshot: "{{ .Phases.takeConsistentBackup.Output.kopiaOutput }}"
    phases:
    - func: KubeTask
      name: takeConsistentBackup
      objects:
        mongosecret:
          kind: Secret
          name: '{{ .Deployment.Name }}'
          namespace: "{{ .Deployment.Namespace }}"
      args:
        namespace: "{{ .Deployment.Namespace }}"
        image: ghcr.io/kanisterio/mongodb:0.72.0
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          host={{ .Deployment.Name }}.{{ .Deployment.Namespace }}.svc.cluster.local
          dbPassword='{{ index .Phases.takeConsistentBackup.Secrets.mongosecret.Data "mongodb-root-password" | toString }}'
          dump_cmd="mongodump --oplog --gzip --archive --host ${host} -u root -p ${dbPassword}"
          backup_file_path='rs_backup.gz'
          ${dump_cmd} | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -

Thank you.

Userlevel 6
Badge +2

 

I am not sure why {{ .Deployment.Name }} evaluates to a nil pointer. I will see if I can find anything tomorrow.

During the backup we create a resource called an ActionSet. If possible, could you get the YAML output of it? (We clean it up as soon as the backup fails, so you will have to watch it to catch the output.)

kubectl get actionset -o yaml -w

Could you also try the following:

'{{ index .Object.metadata.name }}' in place of '{{ .Deployment.Name }}'

and

'{{ .Namespace.Name }}' in place of '{{ .Deployment.Namespace }}'

Userlevel 3

Hi Jaiganesh,

I edited the blueprint YAML as you suggested:

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mongodb-blueprint
actions:
  backup:
    outputArtifacts:
      mongoBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using `--output-name` flag
        kopiaSnapshot: "{{ .Phases.takeConsistentBackup.Output.kopiaOutput }}"
    phases:
    - func: KubeTask
      name: takeConsistentBackup
      objects:
        mongosecret:
          kind: Secret
          name: '{{ index .Object.metadata.name }}'
          namespace: '{{ .Namespace.Name }}'
      args:
        namespace: "{{ .Deployment.Namespace }}"
        image: ghcr.io/kanisterio/mongodb:0.72.0
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          host={{ index .Object.metadata.name }}.{{ .Namespace.Name }}.svc.cluster.local
          dbPassword='{{ index .Phases.takeConsistentBackup.Secrets.mongosecret.Data "mongodb-root-password" | toString }}'
          dump_cmd="mongodump --oplog --gzip --archive --host ${host} -u root -p ${dbPassword}"
          backup_file_path='rs_backup.gz'
          ${dump_cmd} | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -

And I get the following error:

 

cause:
  cause:
    cause:
      fields:
        - name: message
          value: secrets "mongodb-logical" not found

It seems that '{{ index .Object.metadata.name }}' is rendered as the deployment's namespace and not its name (the secret name is the same as the deployment name).

Userlevel 6
Badge +2

That’s right. With the updated blueprint, the templates are rendered correctly, but a secret with the name `mongodb-logical` is not available in the namespace.

Here, the name under mongosecret denotes the name of the secret, and it is derived from the deployment's name ('{{ index .Object.metadata.name }}').

        mongosecret:
          kind: Secret
          name: '{{ index .Object.metadata.name }}'
          namespace: '{{ .Namespace.Name }}'

I assume your deployment name is `mongodb-logical` but your secret name is something else.

Is that the name of your deployment? And you mentioned you have annotated the deployment with `kanister.kasten.io/blueprint: mongodb-blueprint`, right?

'{{ index .Object.metadata.name }}' should render the name of the object that you have annotated with the blueprint name.

You will have to construct the blueprint in such a way that the rendered value gives the name of the secret.
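
A quick way to compare the two is to list both resources in the application namespace (placeholder namespace shown):

  kubectl -n <your-namespace> get deployments
  kubectl -n <your-namespace> get secrets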

Userlevel 3

That’s right. With the updated blueprint, the templates are rendered correctly, but a secret with the name `mongodb-logical` is not available in the namespace.

Here, the name under mongosecret denotes the name of the secret, and it is derived from the deployment's name ('{{ index .Object.metadata.name }}').

        mongosecret:
          kind: Secret
          name: '{{ index .Object.metadata.name }}'
          namespace: '{{ .Namespace.Name }}'

I assume your deployment name is `mongodb-logical` but your secret name is something else.

You will have to construct the blueprint in such a way that the rendered value gives the name of the secret.

 

The deployment name in my case is mongo-logical-mongodb and the namespace is mongodb-logical (these values come from the MongoDB Bitnami chart).

 

I’m just wondering why '{{ index .Object.metadata.name }}' should be equal to {{ .Deployment.Name }}.

Userlevel 6
Badge +2

@claudionw94

Kanister works at the object level. The well-known objects in Kanister as of today are Deployment, StatefulSet, PersistentVolumeClaim, Namespace, and OpenShift's DeploymentConfig.

K10 creates an ActionSet that references the blueprint manifest and sets the object in the ActionSet.

This object is nothing but the resource that you have annotated with the blueprint name (kanister.kasten.io/blueprint='mongodb-blueprint').

So if you have annotated the deployment, as in your case, '{{ index .Object.metadata.name }}' will be the same as {{ .Deployment.Name }}.

But it seems there is a conflict in the object, and the object being used here is the namespace.

If the object referenced is a `Namespace` and the blueprint uses {{ .Deployment.Name }}, it will evaluate to a nil value.

You should be able to see which object is used in your ActionSet if you look carefully at the entire error message; it will mention a field called object.

cause:
  cause:
    cause:
      fields:
        - name: message
          value: 'could not render object reference {mongosecret}: template: config:1:14:
            executing "config" at <.Deployment.Name>: nil pointer evaluating
            *param.DeploymentParams.Name'

 

Since you mentioned that '{{ index .Object.metadata.name }}' renders to the namespace name, my guess is that you annotated the Namespace with the blueprint name (kanister.kasten.io/blueprint='mongodb-blueprint') instead of the deployment, or annotated both the namespace and the deployment.
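
A quick way to verify what is annotated, using the names you mentioned (adjust if they differ):

  kubectl get namespace mongodb-logical -o jsonpath='{.metadata.annotations}'
  kubectl -n mongodb-logical get deployment mongo-logical-mongodb -o jsonpath='{.metadata.annotations}'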

 

Another thing that could cause this issue is if you have selected the

Pre and Post-Snapshot Action Hooks under the Advanced settings in the K10 UI policy form.

By default, Namespace is used as the object when the hooks are run from the Policy API.

Userlevel 3

Hi jaiganeshjk,

 

Actually, I added the blueprint as a Pre-Snapshot hook under the Advanced settings in the K10 UI policy form, and I now understand that in this case the object referenced is the Namespace and not the Deployment (which is why {{ index .Object.metadata.name }} is equal to the namespace name and not to the deployment name).

 

How can I target the Deployment through the Advanced settings in the K10 UI policy form and use the {{ .Deployment.Namespace }} and {{ .Deployment.Name }} templates?

Userlevel 6
Badge +2

As of today, you cannot do it from the Policy API/UI. It defaults to Namespace.

https://docs.kasten.io/latest/kanister/hooks.html#kanister-execution-hooks

We had a specific use case to run a post-restore hook against a particular resource, so we added that to the UI.

But we don’t have it for pre/post-backup hooks yet. It could be a good feature request.

 

However, I see this is a logical blueprint, which uses mongodump to back up the database.

Usually, hooks are used for freezing/unfreezing the database to get consistent snapshots/exports.

Is there a use case where you need the backup to run as a pre-hook?

You can name the actions in the blueprint backupPrehook and backupPosthook to achieve this (see the sketch below).

But you will have to remove the hooks from the policy for DeploymentParams to render.
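
A minimal sketch of that action naming (structure only; the `...` marks the phase contents, which would stay as in your current blueprint):

actions:
  backupPrehook:
    phases:
    - func: KubeTask
      ...
  backupPosthook:
    phases:
    - func: KubeTask
      ...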

 

Let me know if this answers all of your questions.

Userlevel 3

Hi jaiganeshjk,

 

OK, understood. Is there perhaps another way to execute the blueprint within a Kasten policy?
I am trying to hardcode the Deployment name and namespace in the blueprint manifest so the Kanister operator doesn’t have to resolve them. In this case, the Kanister job pod is created in the correct namespace, but it fails, and if I read the pod logs (before the pod dies) I get the following error:

 

time="2022-02-15T10:26:37.638419129Z" level=info msg="Kando failed to execute" File=cmd/kando/main.go Function=main.main Line=22 error="Unsupported Location type: " hostname=kanister-job-htfzr

 


Reading this, I think the blueprint is rendered correctly, but there is a problem with "--profile {{ toJson .Profile }}" in the kando command.
Does the Profile have to be created with kanctl rather than through the Kasten UI Location profile creation?
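
For reference, this is roughly how I watched the Kanister job pod to capture those logs before it was cleaned up (a sketch; the pod name comes from the hostname in the log above, and the namespace placeholder is wherever the job pod runs):

  kubectl -n <job-namespace> get pods -w
  kubectl -n <job-namespace> logs -f kanister-job-htfzr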

Userlevel 6
Badge +2

@claudionw94 As I mentioned before, hooks are usually meant to freeze and unfreeze databases or run a few commands before/after the backup and after the restore.

They are not meant for copying data or doing data operations against a location profile.

Execution hooks do not require location profiles, and hook Blueprint actions cannot use template parameters and helpers such as {{ .Profile.Location.Bucket }} or kando location.

 

If you are looking for application-consistent backups, please have a look at this example.

 

BTW, you don’t have to create profiles manually using `kanctl`. K10 generates one automatically when it runs a backup with a Kanister action.

Userlevel 3

Hello @jaiganeshjk ,

Everything is clear now, but I have another question.

 

I’ve created a new policy in Kasten without the pre-hook and post-hook (for the MongoDB blueprint),

but I have created the blueprint and annotated the deployment with it. I ran the policy, and this time {{ toJson .Profile }} is rendered correctly, but I’m facing an error with Kopia:

 

 

cause:
  cause:
    cause:
      cause:
        cause:
          message: Cannot get tenantID from config
        fields:
          - name: storageType
            value: AD
        file: kasten.io/k10/kio/storage/azuredisk.go:37
        function: kasten.io/k10/kio/storage.newAzureDisk
        linenumber: 37
        message: Failed to initialize storage provider
      file: kasten.io/k10/kio/exec/phases/phase/data_manager.go:81
      function: kasten.io/k10/kio/exec/phases/phase.(*NativeDataManager).DataManagerSetup
      linenumber: 81
      message: Could not get storage provider. Validate that Storage provider
        credentials are configured correctly

I executed the “kopia repository connect s3” CLI command, specifying the correct endpoint, AWS access key, AWS secret key, etc., and Kopia reports the following message:

 

Enter password to open repository:

But I’ve never created a Kopia repository and never specified any password.

Userlevel 6
Badge +2

@claudionw94, the problem here seems to be that you are still running a blueprint that only has pre-backup and post-backup hooks.

That means the blueprint doesn’t contain a backup action, so K10 tries to take a snapshot of the PVC that your MongoDB uses by looking at its provisioner.

By the looks of it, you seem to be using the Azure disk provisioner for the PVCs.

K10 needs permissions to manage provider snapshots using the Azure APIs. Follow these docs to configure K10 with the proper Azure service principal (https://docs.kasten.io/latest/install/azure/azure.html#installing-k10).
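
For reference, the service principal details are supplied as Helm values when installing or upgrading K10, roughly like this (a sketch based on the linked docs; the placeholders are your own tenant and app credentials):

  helm install k10 kasten/k10 --namespace=kasten-io \
    --set secrets.azureTenantId=<tenantID> \
    --set secrets.azureClientId=<clientID> \
    --set secrets.azureClientSecret=<clientSecret>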
 

I executed the “kopia repository connect s3” CLI command, specifying the correct endpoint, AWS access key, AWS secret key, etc., and Kopia reports the following message:

 

Enter password to open repository:

But I’ve never created a Kopia repository and never specified any password.

K10 creates a Kopia repository with a generated key (stored in the catalog) to store all the data at the S3 endpoint.

You will not be able to access the data inside the Kopia repo without that key.

You will only be able to restore the data with K10.

Hi Jaiganesh,

where can I see the step-by-step output of the mongodb-logical backup? https://docs.kasten.io/latest/kanister/mongodb/install_logical.html

I want to make sure that the mongodump has actually been taken (I can’t see the dumps in the action details of my backup policy, and it seems like only the snapshots are being taken and not the mongodumps).

 

Thanks for the support,

Olidong
