Solved

Kasten gateway-ext pod stuck in Pending state

  • 17 November 2021
  • 6 comments
  • 506 views

Userlevel 4
Badge +1

Hi All,

 

I installed Kasten on K3s and the dashboard works fine when I open it, but the svclb-gateway-ext pods are always stuck in the Pending state:

NAME                                  READY   STATUS    RESTARTS   AGE
svclb-gateway-ext-mpsb5               0/1     Pending   0          2d1h
svclb-gateway-ext-rr2xt               0/1     Pending   0          2d1h
kanister-svc-c7bc74b9d-vx266          1/1     Running   0          2d1h
frontend-svc-b58b8b657-gbfv6          1/1     Running   0          2d1h
executor-svc-54d78ddcc7-xl8kg         2/2     Running   0          2d1h
executor-svc-54d78ddcc7-c6zgf         2/2     Running   0          2d1h
auth-svc-69fdd659dc-qmrf6             1/1     Running   0          2d1h
state-svc-74b44f8b69-9j68q            1/1     Running   0          2d1h
executor-svc-54d78ddcc7-5q8j8         2/2     Running   0          2d1h
config-svc-5d79f9c786-bd9bj           1/1     Running   0          2d1h
logging-svc-59455454cf-ts7b8          1/1     Running   0          2d1h
prometheus-server-75fcc94d5c-ltt2b    2/2     Running   0          2d1h
catalog-svc-5f58597479-c6mfg          2/2     Running   0          2d1h
k10-grafana-55cf89786-cgz9n           1/1     Running   0          2d1h
crypto-svc-5585dd8884-gqqpm           2/2     Running   0          2d1h
aggregatedapis-svc-59ff7cd849-nh27m   1/1     Running   0          2d1h
jobs-svc-7f995cbb74-klvjv             1/1     Running   0          2d1h
metering-svc-687659cf94-bt4w7         1/1     Running   0          2d1h
dashboardbff-svc-86546965d9-8gxsh     1/1     Running   1          2d1h
gateway-78659b78bf-r2h2b              1/1     Running   1          2d1h
svclb-gateway-ext-5b98g               0/1     Pending   0          69s
 

If I describe the pod, I get the error below:

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  38h   default-scheduler  0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 2 node(s) didn't match Pod's node affinity/selector.
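
A quick way to see which pods are already claiming host ports on your nodes is a jsonpath query like the one below (a minimal sketch; the output formatting may need tweaking for your cluster):

# List every pod that requests a hostPort, with the port it claims
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | awk -F'\t' '$3 != ""'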

Please advise if anyone has faced an issue like this.

 

BR,

Ali      

 


Best answer by jaiganeshjk 17 November 2021, 11:58


6 comments

Userlevel 6
Badge +2

@Aly Idriss Thanks for posting this question here.

svclb-gateway-ext-xxxxx is not part of the K10 Helm chart.

 

I am not sure if there's an external component that creates those pods.

Userlevel 4
Badge +1

@jaiganeshjk, I can see a daemonset installed alongside Kasten:

NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-gateway-ext   3         3         0       3            0           <none>          2d2h

 

# kubectl describe daemonset.apps/svclb-gateway-ext -n kasten-io
Name:           svclb-gateway-ext
Selector:       app=svclb-gateway-ext
Node-Selector:  <none>
Labels:         objectset.rio.cattle.io/hash=a0ef3b73da0b70b99b09dd18f63ff190f89953ce
                svccontroller.k3s.cattle.io/nodeselector=false
Annotations:    deprecated.daemonset.template.generation: 1
                objectset.rio.cattle.io/applied:
                  H4sIAAAAAAAA/5xUTW/jNhD9K8WcKUWOk9gS0MMiySFo1zFsby9BEIzIkc2aIgVypI1h6L8XVLxrp81HsUfNx9Pje4/cw1ZbBQXcINXOLolBADb6L/JBOwsFYNOEs24EAmpiVMgIxR...
                objectset.rio.cattle.io/id: svccontroller
                objectset.rio.cattle.io/owner-gvk: /v1, Kind=Service
                objectset.rio.cattle.io/owner-name: gateway-ext
                objectset.rio.cattle.io/owner-namespace: kasten-io
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=svclb-gateway-ext
           svccontroller.k3s.cattle.io/svcname=gateway-ext
  Containers:
   lb-port-80:
    Image:      rancher/klipper-lb:v0.2.0
    Port:       80/TCP
    Host Port:  80/TCP
    Environment:
      SRC_PORT:    80
      DEST_PROTO:  TCP
      DEST_PORT:   80
      DEST_IP:     10.43.15.230
    Mounts:        <none>
  Volumes:         <none>
Events:            <none>
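
The objectset.rio.cattle.io/owner-* annotations above point at the gateway-ext Service, so a quick sanity check (a minimal sketch) is to confirm that Service is of type LoadBalancer:

# ServiceLB only reacts to Services of type LoadBalancer
kubectl get svc gateway-ext -n kasten-io -o jsonpath='{.spec.type}'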
 

Userlevel 6
Badge +2

Yeah @Aly Idriss,

Since you mentioned you are trying out K10 on K3s, it might be something Rancher creates for any external LoadBalancer service you create.

I see that K3s uses something called `Klipper Load Balancer` for LoadBalancer services, and it uses a hostPort to expose the service.

All three of your worker nodes might already have port 80 in use, and that might be why the pods are staying in the Pending state; the sketch after the link below shows one way to check.

https://rancher.com/docs/k3s/latest/en/networking/#service-load-balancer
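
To see whether another svclb pod is already holding port 80 on a node, listing the svclb pods together with the node each one landed on can help (a minimal sketch):

# Show each svclb pod, its status, and the node it was scheduled to
kubectl get pods -A -o wide | grep svclb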

Userlevel 4
Badge +1

Thanks @jaiganeshjk, it is very weird, because I installed K3s without Traefik, and I don't get why it is appearing under the Kasten namespace rather than default or kube-system.

Since I installed metallb-system as an external LB, do you think it is safe to remove the K3s LB service?

BR,

Ali

Userlevel 6
Badge +2

Yes, it is safe to remove it. You might have to disable ServiceLB along with Traefik while installing K3s; a sketch of the install flags follows the link below.

It seems that if you have a Service of type LoadBalancer, the ServiceLB controller spins up a daemonset for it.

https://github.com/k3s-io/k3s/issues/2323#issuecomment-701218153
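
For reference, disabling both components at install time might look like this (a sketch using the K3s --disable flags; adjust for your install method):

# Install K3s without the built-in ServiceLB and Traefik
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable servicelb --disable traefik" sh -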

Userlevel 4
Badge +1

Thank you very much @jaiganeshjk!
