
Good morning,

 

I have an issue in our testing environment where K10 discovers too many worker nodes in the cluster. It was working as expected prior to the update to 6.0.7.

 

Is there a possibility to reset or fix this?

Here is the node list; as you can see, we have 3 control-plane nodes and 4 worker nodes:

(⎈|oidc@play:kasten-io)]$ kubectl get nodes

NAME                  STATUS   ROLES           AGE    VERSION
k8s-play-m-0.domain   Ready    control-plane   544d   v1.24.14
k8s-play-m-1.domain   Ready    control-plane   544d   v1.24.14
k8s-play-m-2.domain   Ready    control-plane   544d   v1.24.14
k8s-play-w-0.domain   Ready    <none>          544d   v1.24.14
k8s-play-w-1.domain   Ready    <none>          544d   v1.24.14
k8s-play-w-2.domain   Ready    <none>          544d   v1.24.14
k8s-play-w-3.domain   Ready    <none>          544d   v1.24.14

@jaiganeshjk 


I believe you can restrict the nodes that K10 runs on by leveraging taints and tolerations.

Check out the "Pinning K10 to Specific Nodes" section here:

https://docs.kasten.io/latest/install/advanced.html
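
Roughly, that boils down to a couple of Helm values at install/upgrade time. I'm going from memory here, so treat the value names (global.nodeSelector / global.tolerations) and the worker label below as placeholders and double-check them against that page; note that --set-json needs Helm 3.10 or newer:

helm upgrade k10 kasten/k10 --namespace=kasten-io --reuse-values \
  --set-json 'global.nodeSelector={"node-role.kubernetes.io/worker": ""}' \
  --set-json 'global.tolerations=[{"key": "dedicated", "operator": "Equal", "value": "backup", "effect": "NoSchedule"}]'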

 

However, I am not completely certain that this will affect the license count. Let’s see what the kasten folks say.


@Daniel Moes 

What was your old K10 version? Was the node count fine there? Have there been any changes in your cluster configuration?

 


Hello Geoff,

It turns out our control-plane nodes were not tainted as such. I suppose this went wrong during a Kubernetes update that was done at pretty much the same time as the Kasten update. So Kasten seems to check the taints to identify each node's type.
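
In case it helps anyone else hitting this: the taints per node can be listed with plain kubectl (standard custom-columns output, nothing Kasten-specific), e.g.:

kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

Nodes without any taint show up as <none> in the TAINTS column, which is how our control-plane nodes looked.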

 

I fixed it by adding the taint via kubectl:

kubectl taint nodes node.domain node-role.kubernetes.io/control-plane:NoSchedule
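
(node.domain is a placeholder for the control-plane node name, e.g. k8s-play-m-0.domain in our case.) To double-check afterwards that the taint is really there:

kubectl describe node k8s-play-m-0.domain | grep -i taints

It should now list node-role.kubernetes.io/control-plane:NoSchedule.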

Thank you very much for pointing me in the right direction! 😀


Glad to have helped. Cheers

