Solved

Kanister-sidecar resources definition


Userlevel 3

Hi.
Does anyone know how, using the Kasten Helm chart (4.5.9), to influence/change the default kanister-sidecar behavior?
So far I wasn’t able to deploy it as a sidecar in any namespace with limits set (without patching deployments or statefulsets).
Kanister-sidecar attaches properly to any deployment in a namespace without limits, and I am unable to find an easy way to change its resources.
Did a little bit of tweaking of pod-spec-override, but it seems it’s for different purposes.
 

Best answer by jaiganeshjk 18 February 2022, 08:39


11 comments

Userlevel 6
Badge +2

@marcinbojko, you can use the Helm value `genericVolumeSnapshot.resources.[requests|limits].[cpu|memory]` to set the default resource requests/limits for the kanister-sidecar containers.

For example, to set the CPU and memory requests/limits to 1 core and 1Gi respectively, use the below upgrade command:

```
helm get values k10 --output yaml --namespace=kasten-io > k10_val.yaml && \
helm upgrade k10 kasten/k10 --namespace=kasten-io -f k10_val.yaml \
--set genericVolumeSnapshot.resources.requests.cpu=1 \
--set genericVolumeSnapshot.resources.requests.memory=1Gi \
--set genericVolumeSnapshot.resources.limits.cpu=1 \
--set genericVolumeSnapshot.resources.limits.memory=1Gi
```
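If you prefer keeping the configuration in a values file rather than `--set` flags, the same settings can be expressed as a `values.yaml` fragment (a sketch assuming the same `genericVolumeSnapshot` keys as the flags above):

```yaml
# Equivalent values.yaml fragment for the --set flags above
genericVolumeSnapshot:
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 1
      memory: 1Gi
```

You would then pass it with `helm upgrade k10 kasten/k10 --namespace=kasten-io -f values.yaml`.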

However, this won’t have any effect on already injected `kanister-sidecar` containers. You might have to uninject them to get them updated.

We have an in-house tool k10tools that can be used to easily un-inject all the sidecars.

Userlevel 3

Thank you @jaiganeshjk, will do a hot-tryout in a sec.

Userlevel 3



First try after patching Kasten and redeploying demo deployments worked.
Thanks @jaiganeshjk - I’d strongly suggest adding more comments in the charts about this value’s correlation with the sidecars, as it’s not clear.

Userlevel 3

Now, we dig deeper. The Kanister job pod also has no limits set:

```
Failed to create pod: Failed to create pod. Namespace: elasticsearch, NameFmt: kanister-job-: pods "kanister-job-7c22n" is forbidden: failed quota: default-4srzr: must specify limits.cpu,limits.memory
```
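An error of this shape comes from a `ResourceQuota` in the namespace whose `hard` section includes `limits.cpu`/`limits.memory` - any pod created without those limits is rejected. An illustrative quota that would produce the message above (the actual `default-4srzr` quota in the cluster may differ):

```yaml
# Illustrative ResourceQuota; pods in this namespace must declare
# limits.cpu and limits.memory or their creation is forbidden.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default
  namespace: elasticsearch
spec:
  hard:
    limits.cpu: "4"
    limits.memory: 8Gi
```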

Userlevel 6
Badge +2

Yeah, I assume this kanister-job-* pod is from a blueprint-based backup, right?

You might have to use pod overrides for this use case, I believe.

Because if you have an admission controller that forbids pods without resource limits, the Kanister pod override is the way to go for pods created dynamically (by Kanister).

 

Userlevel 3

I think so. But - sorry for that - the docs for the override are quite sparse and will require reverse engineering of the kanister-job construction. Is there any real-world working example?
Currently, experimenting with overrides requires me to rebuild almost the whole kanister-job anew, which may only hold until - let’s say - the next upgrade.
Don’t want to stress out, but the inability to set resources for anything related to Kasten eliminates it from 99% of cases ;)

Userlevel 6
Badge +2

@marcinbojko

Why do you think you will lose it after the upgrades ?

Would you be able to let me know how you are configuring the podOverrides ?

 

IMO, You won’t lose it unless the configMap is deleted manually.

Userlevel 3

@jaiganeshjk I had to prepare an almost complete replacement for the kanister-job, including name and image - i.e. variables which can change after a chart/application upgrade.
I am unable to prepare a ConfigMap which would be merged successfully using only partial data (for example, skipping name, skipping image, etc.)
I was hoping for:
 

```
apiVersion: v1
data:
  override: |
    kind: Pod
    spec:
      containers:
        - name: kanister-job
          resources:
            requests:
              cpu: "100m"
              memory: "512Mi"
            limits:
              cpu: "200m"
              memory: "1Gi"
kind: ConfigMap
metadata:
  name: pod-spec-override
  namespace: kasten-io
```

 

Userlevel 6
Badge +2

@marcinbojko 

The container name is a mandatory field in the override, and it should match the actual pod’s container name.

In the case of K10, all containers in pods created by Kanister have the default name `container`.

Can you try the below ConfigMap? It should work.

```
apiVersion: v1
data:
  override: |
    kind: Pod
    spec:
      containers:
        - name: container
          resources:
            requests:
              cpu: "100m"
              memory: "512Mi"
            limits:
              cpu: "200m"
              memory: "1Gi"
kind: ConfigMap
metadata:
  name: pod-spec-override
  namespace: kasten-io
```

 

Userlevel 3

Thanks @jaiganeshjk - THAT information is very crucial and should be in the docs!
Seems that at least this time the pods are being created (still failing though), but that’s at least one step closer.
 

Userlevel 3

And debugged to the end - we need extra work to make elasticdump in a blueprint work with basic auth.
Thanks, everything above is working ;)
