I recently created a K3s cluster with Kasten K10. Upon exposing the Kasten K10 gateway to access the dashboard, I get a 403 access denied from the gateway service.
Did you disable Klipper (servicelb) when installing k3s? I remember having issues because it or Traefik was occupying ports. It was a while back, but ever since then I have always installed with --disable servicelb and --disable traefik.
I think you would also need to remove the Traefik YAML file from the manifests folder so that it does not come back up after a reboot.
Since they are already running, you would need to stop k3s.service, then edit /etc/systemd/system/k3s.service and add the flags to this line:
ExecStart=/usr/local/bin/k3s \
    server --disable servicelb --disable traefik
Then do a systemctl daemon-reload and systemctl start k3s.
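For reference, a rough sequence on the server node might look like this (assuming k3s was installed with the standard install script, so the unit file lives at /etc/systemd/system/k3s.service; the Traefik manifest path may differ on your install):

sudo systemctl stop k3s
# edit the unit file and extend the ExecStart line, e.g.:
#   ExecStart=/usr/local/bin/k3s \
#       server --disable servicelb --disable traefik
sudo vi /etc/systemd/system/k3s.service
sudo systemctl daemon-reload
sudo systemctl start k3s
# optionally remove the bundled Traefik manifest so it is not re-applied after reboot
sudo rm /var/lib/rancher/k3s/server/manifests/traefik.yaml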
See if that helps.
Again I don’t remember what my issue exactly was but do remember disabling those.
cheers
Hello @TobiASS and thank you for testing/using K10!
The externalGateway group of parameters is mostly useful for AWS deployments (on EKS for example).
In the case of k3s, you don’t need that, as k3s should come out of the box with Traefik configured as an Ingress.
Ingress is Kubernetes’s term for what the rest of the world calls an application load balancer or a reverse proxy.
In your case, try removing that --set externalGateway.create=true and replacing it with --set ingress.create=true --set ingress.class=traefik.
You may need additional parameters, depending on how you set up your k3s cluster and Traefik deployment, but it should get you started.
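As a rough sketch (assuming the kasten Helm repo is already added and the kasten-io namespace is used, as elsewhere in this thread; adjust to match your actual install command):

helm upgrade --install k10 kasten/k10 --namespace=kasten-io \
  --set ingress.create=true \
  --set ingress.class=traefik

With the default release name k10, the dashboard should then be served behind Traefik under the /k10/ path.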
Best regards,
Thanks shuguet
I too will go back and try this again now :)
cheers
Hello @Geoff Burke and @shuguet, thank you for responding to my question. I am running K3s with Traefik and servicelb disabled, and I install MetalLB and Traefik myself. Furthermore, I re-created the cluster and installed K3s with:
And now the gateway pod won't even start… so I don't know what is happening.
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m17s                  default-scheduler  Successfully assigned kasten-io/gateway-8467b6cbf6-d49gd to helios
  Normal   Pulling    5m14s                  kubelet            Pulling image "docker.io/emissaryingress/emissary:2.2.2"
  Normal   Pulled     4m28s                  kubelet            Successfully pulled image "docker.io/emissaryingress/emissary:2.2.2" in 45.862592506s
  Normal   Killing    3m45s                  kubelet            Container ambassador failed liveness probe, will be restarted
  Warning  Unhealthy  3m42s                  kubelet            Readiness probe failed: Get "http://10.32.175.224:8877/ambassador/v0/check_ready": dial tcp 10.32.175.224:8877: connect: connection refused
  Normal   Pulled     3m41s                  kubelet            Container image "docker.io/emissaryingress/emissary:2.2.2" already present on machine
  Normal   Created    3m40s (x2 over 4m28s)  kubelet            Created container ambassador
  Normal   Started    3m40s (x2 over 4m28s)  kubelet            Started container ambassador
  Warning  Unhealthy  3m (x5 over 3m51s)     kubelet            Liveness probe failed: HTTP probe failed with statuscode: 503
  Warning  Unhealthy  9s (x31 over 3m57s)    kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503
It looks to me like there's something going on with your cluster's network and/or Ingress controller.
Could you do a sanity check with a simple application, like a simple web server, exposed as a service, with an Ingress in front?
K10 doesn’t do much more than that when configured for an Ingress.
If it doesn't work, then you could start by removing the ingress config too, and just accessing the dashboard through kubectl proxy to make sure that part works, to isolate whether the issue comes from K10 or from your cluster config.
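As a rough example of that sanity check (the names web and web.example.local are placeholders, not anything from this thread), something along these lines exercises the Ingress path end to end:

kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80
kubectl create ingress web --class=traefik --rule="web.example.local/*=web:80"
# then hit Traefik's address with the matching Host header
curl -H "Host: web.example.local" http://<traefik-lb-ip>/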
In my case I just used MetalLB and set the external gateway to true:
--set externalGateway.create=true
That way the svc is created as a LoadBalancer-type service and is allocated an IP by MetalLB. However, that is for when you skip using Ingress altogether, so it is not always an option.
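A minimal sketch of that approach (release name and namespace assumed to match the rest of the thread):

helm upgrade --install k10 kasten/k10 --namespace=kasten-io \
  --set externalGateway.create=true
# MetalLB should then assign an external IP to the LoadBalancer service it creates
kubectl get svc --namespace kasten-io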
cheers
Looking back at the logs @TobiASS provided, the 403 access denied he’s getting is because the request is denied by an external authorization service (that’s the UAEX “response flag” part after the 403 response code).
So you could also go back to using your externalGateway if you’d prefer that to an Ingress, and try a different auth mechanism, like basic (to check if that UAEX goes away).
I would still recommend deploying without either of those and simply trying port-forward access first.
If that works, then work your way “up”, either with an externalGateway (changes Service gateway-svc to type LoadBalancer) or an Ingress (ingress.create=true and ingress.class=traefik).
If either (or both) do not work, then the “simple app” strategy (leaving K10 aside for a minute) should help assess if those components are working properly in your cluster.
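For that port-forward check, the command from the K10 documentation looks roughly like this (the local port 8080 is an arbitrary choice):

kubectl --namespace kasten-io port-forward service/gateway 8080:8000

Then open http://127.0.0.1:8080/k10/ in a browser; note the /k10/ path, which comes up again later in this thread.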
Hello All,
So after my last post about re-creating the cluster, I don't know what I did, but I am not able to get the gateway pod to run again (also using a default K10 deployment like this: “helm install k10 kasten/k10 --namespace=kasten-io”). Looking at the logs I provided earlier, the errors are the same; it seems to boil down to the following:
I think the key element in this error is the following:
dial tcp [::1]:8004: connect: connection refused
It tries to connect to localhost over IPv6, and when I exec into the pod I get the same result, though localhost over IPv4 is reachable.
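To illustrate the comparison being made here (assuming a shell and curl are available inside the gateway container, which may not be true for every image):

kubectl --namespace kasten-io exec -it deploy/gateway -- sh
# inside the container:
curl -v http://127.0.0.1:8004/   # reachable over IPv4
curl -v http://[::1]:8004/       # connection refused over IPv6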
Can you try disabling ipv6 on that system?
Might be something that k3s does, some kind of preference for IPv6 if it is available.
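One way to try that at runtime (a temporary setting; making it persistent via /etc/sysctl.conf or a kernel boot parameter is a separate step):

sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1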
Hi @shuguet, I have completely disabled IPv6 and I have also changed my Kubernetes dual-stack install to IPv4 only. Sadly, it yields the same result.
There are still inet6 link-local addresses in your screenshot; that's evidence that IPv6 isn't fully disabled on the system.
Contrary to IPv4, IPv6 link-local addresses are valid routable addresses, therefore you'd have to disable IPv6 in your host kernel, I guess.
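A sketch of disabling IPv6 at the kernel level on a GRUB-based distribution (exact files and commands depend on the distribution, so treat this as an assumption):

# add ipv6.disable=1 to the kernel command line, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... ipv6.disable=1"
sudo update-grub    # or grub2-mkconfig -o /boot/grub2/grub.cfg on RHEL-family systems
sudo reboot
# after reboot, no inet6 addresses should remain:
ip -6 addr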
May I ask what you’re trying to achieve with that system/environment? Why disable servicelb and traefik from k3s for example?
My own k3s environment with the out-of-the-box config works flawlessly; I'm just trying to understand the motivation behind moving away from the safe/sane defaults k3s ships with.
Hello, I am back. I figured out what was going wrong, kind of. I am back at my starting situation. It turns out that removing the pre-installed CoreDNS and reinstalling it incorrectly does all kinds of funky stuff to your cluster.
I am running K10 and installed it with a helm command like this:
@TobiASS quick thought: When you are using the port-forward, are you using the `/k10/` path in your URL? So http://localhost:8080/k10/ ? We do not redirect automatically from / to /k10/; you need to specify the full path including both slashes, otherwise it will not work.
You can customize that path in the helm chart, but that’s not the point here.
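To make the difference concrete (assuming the same local port 8080 from the port-forward above; the exact response on the root path may vary):

curl -i http://localhost:8080/       # root path: the dashboard is not served here
curl -i http://localhost:8080/k10/   # correct: include /k10/ with both slashes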