
I noticed my API server is seeing high CPU usage.

kube-system            kube-apiserver-nodeX                          1260m        1041Mi

Every 1-2s I see this in my logs.

I0825 00:33:20.731070 1 trace.go:219] Trace[1244752237]: "List" accept:application/json,audit-id:x,client:x.x.x.x,protocol:HTTP/2.0,resource:profiles,scope:namespace,url:/apis/cr.kanister.io/v1alpha1/namespaces/kasten-io/profiles,user-agent:aggregatedapis-server/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (25-Aug-2023 00:33:19.431) (total time: 1299ms):
Trace[1244752237]: ---"List(recursive=true) etcd3" audit-id:x,key:/cr.kanister.io/profiles/kasten-io,resourceVersion:,resourceVersionMatch:,limit:0,continue: 1299ms (00:33:19.431)
Trace[1244752237]: ---"Writing http response done" count:4313 268ms (00:33:20.730)
Trace[1244752237]: [1.299384717s] [1.299384717s] END

 

If I stop K10’s `aggregatedapis-svc`, the log spam stops and CPU usage drops significantly.

kube-apiserver-nodeX                         198m         1049Mi

Why is K10 hitting the API server so frequently and spinning an entire CPU core?

It appears that Kasten is continually polling the API server to see whether new profiles have been created.

I don’t understand (1) why this is done so frequently, and (2) why polling was used instead of a watch (see the sketch below).

This is clogging up the API server’s request queue to the point that requests from other services are being rejected, which affects the stability of the entire cluster.


Anyone else seeing these repeated policy checks?

Kasten team, any thoughts?


We are observing the same issue and are interested in a solution.


Is there any update on this? We experienced heavy CPU throttling even though we had assigned 8 cores; we have now raised the limit to 16 cores.

