Solved

0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.


Good day, I tried to install K10 in my cluster and I got the following error. What could be the problem, or how can I solve it? 

 

Error Message: 
0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
 

Install Method: 
Helm

helm install k10 kasten/k10 --namespace=kasten-io \
--set ingress.create=true \
--set ingress.class=nginx \
--set ingress.urlPath="/" \
--set auth.basicAuth.enabled=true \
--set auth.basicAuth.htpasswd='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

Kubernetes Version:
v1.23.6



Hello @Smugness2774 ,

Thank you for trying out K10!

First, Kubernetes 1.23 is not yet supported/validated by K10 (https://docs.kasten.io/latest/operating/support.html). That doesn’t mean it won’t work, just that we have not formally validated it, so your mileage may vary.

Also, as a side note: if you’re a commercial user, this means we won’t be providing support, so please keep that in mind.

That being said, your error looks like it comes from the lack of a valid default StorageClass in your cluster.

Did you run the pre-flight checks before installing K10? (https://docs.kasten.io/latest/install/requirements.html#pre-flight-checks)

Can you post the output of that command?
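In case it helps, here is a rough sketch of how to check for a default StorageClass and run the pre-flight checks (the StorageClass name "standard" below is just an example; substitute your own):

```shell
# List StorageClasses; the default one is shown with "(default)" next to its
# name and carries the storageclass.kubernetes.io/is-default-class annotation.
kubectl get storageclass

# Mark an existing StorageClass as the cluster default
# ("standard" is an example name, not necessarily yours).
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Run K10's pre-flight checks (the primer script from the Kasten docs).
curl -s https://docs.kasten.io/tools/k10_primer.sh | bash
```

The primer script reports whether a default StorageClass exists and whether CSI snapshot capabilities are available, which is exactly what this error points at.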

Best regards,


We’ve solved the problem now. It turned out to be caused by three issues:

  • We had no snapshot controller installed on the cluster
  • We had no VolumeSnapshotClass installed on the cluster
  • And, for some reason, we had to reinstall nfs-utils on all worker nodes
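For anyone hitting the same combination, a sketch of the fixes (the external-snapshotter manifests are the standard upstream ones; "csi.example.com" is a placeholder for your actual CSI driver, and the package command depends on your distro):

```shell
# Install the VolumeSnapshot CRDs and the snapshot controller
# (from the kubernetes-csi/external-snapshotter project).
kubectl apply -k "https://github.com/kubernetes-csi/external-snapshotter/client/config/crd"
kubectl apply -k "https://github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller"

# Create a VolumeSnapshotClass and annotate it so K10 will use it
# ("csi.example.com" is a placeholder for your CSI driver name).
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: k10-snapshot-class
  annotations:
    k10.kasten.io/is-snapshot-class: "true"
driver: csi.example.com
deletionPolicy: Delete
EOF

# On each worker node (RHEL/CentOS shown; adjust for your distro):
# yum reinstall -y nfs-utils
```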

 

Okay, but if I’m a commercial user, I can still get community support here and use the free tier up to 10 nodes, correct?


You can indeed get community support here on these forums, commercial use or not; that is the purpose of this community :)

I would like to draw your attention to the fact that with the release of 5.0, the “free tier” of Kasten K10 has been reduced from 10 nodes to 5 nodes.

My point above was that your configuration (Kubernetes version 1.23) is outside of K10’s supported configurations, which means we have not formally validated K10 on this version of Kubernetes, and thus you’re more likely to run into issues no one has faced before.
