Good morning,
I have an issue in our testing environment where K10 discovers too many worker nodes in the cluster. It was working as expected prior to the update to 6.0.7.
Is there a way to reset or fix this?
Here is the output of the nodes; as you can see, we have 3 control-plane nodes and 4 worker nodes:
(⎈|oidc@play:kasten-io)]$ kubectl get nodes
NAME                  STATUS   ROLES           AGE    VERSION
k8s-play-m-0.domain   Ready    control-plane   544d   v1.24.14
k8s-play-m-1.domain   Ready    control-plane   544d   v1.24.14
k8s-play-m-2.domain   Ready    control-plane   544d   v1.24.14
k8s-play-w-0.domain   Ready    <none>          544d   v1.24.14
k8s-play-w-1.domain   Ready    <none>          544d   v1.24.14
k8s-play-w-2.domain   Ready    <none>          544d   v1.24.14
k8s-play-w-3.domain   Ready    <none>          544d   v1.24.14
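
As a sanity check, counting the nodes that do not carry the control-plane role label also gives 4 workers (this assumes the standard node-role.kubernetes.io/control-plane label, which the ROLES column above suggests):

# count nodes without the control-plane role label, i.e. the workers
kubectl get nodes --selector='!node-role.kubernetes.io/control-plane' --no-headers | wc -l

So the cluster itself reports 4 workers; only the node count K10 discovers has been wrong since the 6.0.7 update.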