Hi,
I am struggling with a problem. K10 keeps reporting low disk space for three of its services:
Disk space for service Catalog is low at 21%
Disk space for service Jobs is low at 21%
Disk space for service Logging is low at 21%
I can’t, for the life of me, work out how to fix this! Nothing I have come across seems to work.
I can see this if I query the Helm chart:
bash-4.4$ helm show all kasten/k10
<snip>
persistence:
  mountPath: "/mnt/k10state"
  enabled: true
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: ""
  accessMode: ReadWriteOnce
  size: 20Gi
  metering:
    size: 2Gi
  catalog:
    size: ""
  jobs:
    size: ""
  logging:
    size: ""
  grafana:
    # Default value is set to 5Gi. This is the same as the default value
    # from previous releases <= 4.5.1 where the Grafana sub chart used to
    # reference grafana.persistence.size instead of the global values.
    # Since the size remains the same across upgrades, the Grafana PVC
    # is not deleted and recreated, which means no Grafana data is lost
    # while upgrading from <= 4.5.1.
    size: 5Gi
<snip>
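For completeness, I assume the equivalent values-file override would look something like this (the file name and the nesting under global: are my assumption from the K10 docs, mirroring the defaults above):

```yaml
# values-size.yaml -- assumed layout; persistence sits under "global:"
# per the K10 docs, matching the defaults shown in "helm show all".
global:
  persistence:
    size: 45Gi        # global default for K10 service volumes
    catalog:
      size: 45Gi      # per-service override for the catalog volume
```

which would then be applied with `helm upgrade k10 kasten/k10 -n kasten-io -f values-size.yaml`.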
That matches the size I see for the pod (just showing the catalog one here):
bash-4.4$ kubectl -n kasten-io exec catalog-svc-cffb6bcdf-fl4tj -- df -h
Defaulted container "catalog-svc" out of: catalog-svc, kanister-sidecar, upgrade-init (init), schema-upgrade-check (init)
Filesystem      Size  Used  Avail  Use%  Mounted on
overlay          17G   13G   4.0G   76%  /
tmpfs            64M     0    64M    0%  /dev
tmpfs           2.9G     0   2.9G    0%  /sys/fs/cgroup
/dev/vda1        17G   13G   4.0G   76%  /mnt/k10state
shm              64M     0    64M    0%  /dev/shm
tmpfs           5.7G   12K   5.7G    1%  /run/secrets/kubernetes.io/serviceaccount
tmpfs           2.9G     0   2.9G    0%  /proc/asound
tmpfs           2.9G     0   2.9G    0%  /proc/acpi
tmpfs           2.9G     0   2.9G    0%  /proc/scsi
tmpfs           2.9G     0   2.9G    0%  /sys/firmware
bash-4.4$
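I've also been checking the claims themselves rather than the pod filesystem; I believe this is the way to list what the chart actually requested (the PVC names are whatever the chart created, I'm not quoting them from memory):

```shell
# List the PersistentVolumeClaims K10 created and their capacities.
kubectl -n kasten-io get pvc

# Show just the requested storage per claim.
kubectl -n kasten-io get pvc \
  -o custom-columns=NAME:.metadata.name,REQUESTED:.spec.resources.requests.storage
```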
But if I try to change the volume size, it just doesn’t work (this is from trying both an upgrade and a fresh install; same result either way):
bash-4.4$ helm upgrade --install k10 kasten/k10 --namespace=kasten-io --set external.Gateway.create=true --set global.persistence.size=45G --set global.persistence.catalog.size=45G
Release "k10" does not exist. Installing it now.
NAME: k10
LAST DEPLOYED: Mon Sep 25 08:40:59 2023
NAMESPACE: kasten-io
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Kasten's K10 Data Management Platform 6.0.5!
Documentation can be found at https://docs.kasten.io/.
How to access the K10 Dashboard:
To establish a connection to it use the following `kubectl` command:
`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`
The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
The volume size is still 20Gi.
I have tried “45Gi”, 45Gi, “45G” and 45G, but nothing seems to change the values in Helm to anything other than 20Gi.
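On the off-chance the unit suffix matters: as I understand it, Kubernetes treats G (decimal, SI) and Gi (binary, IEC) as different quantities. A quick sanity check with GNU numfmt (my own aside, nothing K10-specific):

```shell
# 45G is an SI (decimal) quantity; 45Gi is an IEC (binary) one.
g=$(numfmt --from=si 45G)       # 45 * 10^9  bytes
gi=$(numfmt --from=iec-i 45Gi)  # 45 * 2^30  bytes
echo "$g $gi"
# → 45000000000 48318382080
```

So either suffix should be well above 20Gi; the difference alone can't explain the size staying put.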
I am using the default storage class:
bash-4.4$ kubectl get sc
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           true                   37d
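Since ALLOWVOLUMEEXPANSION is true, I've been wondering whether an existing release needs the PVC grown directly rather than via chart values. Something like the following is what I had in mind (catalog-pv-claim and the catalog-svc deployment name are my guesses; I'd confirm the real names with kubectl get pvc / get deploy first):

```shell
# Grow the catalog PVC in place. "catalog-pv-claim" is an assumed name --
# confirm it first with: kubectl -n kasten-io get pvc
kubectl -n kasten-io patch pvc catalog-pv-claim \
  -p '{"spec":{"resources":{"requests":{"storage":"45Gi"}}}}'

# The filesystem resize typically only completes once the pod restarts.
kubectl -n kasten-io rollout restart deployment catalog-svc
```

Is that the expected route, or should the Helm values alone be enough?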
This is K10 running on K8s on RHEL8.8.
What am I missing?
Thanks in advance for any pointers!