Question

Resizing question.

  • 25 September 2023

Hi,

I am struggling with a problem. K10 keeps reporting:

Disk space for service Catalog is low at 21%

Disk space for service Jobs is low at 21%

Disk space for service Logging is low at 21%

I can’t, for the life of me, work out how to fix this! Nothing I have come across seems to work.

I can see this if I query the Helm chart:

bash-4.4$ helm show all kasten/k10

<snip>

  persistence:
    mountPath: "/mnt/k10state"
    enabled: true
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: ""
    accessMode: ReadWriteOnce
    size: 20Gi
    metering:
      size: 2Gi
    catalog:
      size: ""
    jobs:
      size: ""
    logging:
      size: ""
    grafana:
      # Default value is set to 5Gi. This is the same as the default value
      # from previous releases <= 4.5.1 where the Grafana sub chart used to
      # reference grafana.persistence.size instead of the global values.
      # Since the size remains the same across upgrades, the Grafana PVC
      # is not deleted and recreated which means no Grafana data is lost
      # while upgrading from <= 4.5.1
      size: 5Gi

<snip>

That matches the size I see for the pod (just showing the catalog one here):

bash-4.4$ kubectl -n kasten-io exec catalog-svc-cffb6bcdf-fl4tj -- df -h
Defaulted container "catalog-svc" out of: catalog-svc, kanister-sidecar, upgrade-init (init), schema-upgrade-check (init)
Filesystem      Size  Used Avail Use% Mounted on
overlay          17G   13G  4.0G  76% /
tmpfs            64M     0   64M   0% /dev
tmpfs           2.9G     0  2.9G   0% /sys/fs/cgroup
/dev/vda1        17G   13G  4.0G  76% /mnt/k10state
shm              64M     0   64M   0% /dev/shm
tmpfs           5.7G   12K  5.7G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           2.9G     0  2.9G   0% /proc/asound
tmpfs           2.9G     0  2.9G   0% /proc/acpi
tmpfs           2.9G     0  2.9G   0% /proc/scsi
tmpfs           2.9G     0  2.9G   0% /sys/firmware
bash-4.4$

But if I try to change the volume size, it just doesn’t work (this is from trying both an upgrade and a fresh install, with the same result):

bash-4.4$ helm upgrade --install k10 kasten/k10 --namespace=kasten-io --set external.Gateway.create=true --set global.persistence.size=45G --set global.persistence.catalog.size=45G
Release "k10" does not exist. Installing it now.
NAME: k10
LAST DEPLOYED: Mon Sep 25 08:40:59 2023
NAMESPACE: kasten-io
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Kasten's K10 Data Management Platform 6.0.5!

Documentation can be found at https://docs.kasten.io/.

How to access the K10 Dashboard:

To establish a connection to it use the following `kubectl` command:

`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`

The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`

The volume size is still 20G.

I have tried using “45Gi”, 45Gi, “45G” and 45G, but nothing seems to change the values in Helm to anything other than 20G. 

I am using the default storage class:

bash-4.4$ kubectl get sc
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           true                   37d
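
For what it's worth, expansion support can also be queried directly; this is just a sanity check, with the class name `standard` taken from the output above:

# Print whether the default StorageClass permits in-place PVC expansion
kubectl get sc standard -o jsonpath='{.allowVolumeExpansion}'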
 

This is K10 running on K8s on RHEL 8.8.

What am I missing?

Thanks in advance for any pointers!


4 comments

To add:

I also tried:

helm upgrade k10 kasten/k10 --namespace 'kasten-io' --recreate-pods --set-string "global.persistence.size=42Gi" --set-string "global.persistence.catalog.size=42Gi"

and also by supplying the changes in a YAML file:

helm install k10 kasten/k10 --namespace=kasten-io --recreate-pods -f helm_updated.yaml

YAML file content:

global:
  persistence:
    storageClass: ""
    accessMode: ReadWriteOnce
    size: 50Gi
    metering:
      size: 2Gi
    catalog:
      size: ""
    jobs:
      size: ""
    logging:
      size: ""
    grafana:
 

Still doesn’t work.
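
In case it helps anyone debugging this, I understand the chart can also be rendered locally to see whether the override is even being picked up (a sketch; the grep pattern is only there to surface the requested PVC sizes):

# Render the chart with my overrides and show the requested PVC sizes
helm template k10 kasten/k10 --namespace=kasten-io -f helm_updated.yaml | grep -B 3 'storage:'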


Hi @Madi.Cristil and @safiya... I know this post is old, and I'm not sure if @ColinT is still looking for assistance, but this appears to be a Kasten post. I'm thinking it may be best to move it to that location/group? Thoughts?

Thanks.


@FRubens 


 Hello @ColinT 

Thank you for using our K10 community!

Regarding the helm parameters:

global.persistence.size - defines the global size of all K10 volumes; it is normally set when installing K10.

global.persistence.catalog.size - specifically increases the catalog PVC size; this value can't be decreased.

The correct helm parameter would be global.persistence.catalog.size, and your storage class also has to support volume expansion.

From the first output I see that you ran helm upgrade together with --install, and it performed a fresh install instead of an upgrade since the k10 release was not found; I do not know what error message you got when upgrading.
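
If you are unsure whether the release exists before upgrading, you can list it first (a quick check, nothing assumed beyond the kasten-io namespace):

# The k10 release should appear here if it is installed
helm list -n kasten-io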

I would recommend first checking your current Helm values for K10 by running the command below against the kasten-io namespace, sending the output to a file:

helm get values k10 -n kasten-io > k10_val.yaml

After that you can run an upgrade using this file, increasing the catalog volume size to, for example, 50Gi:

helm upgrade k10 kasten/k10 --namespace=kasten-io -f k10_val.yaml \
--set-string "global.persistence.catalog.size=50Gi" --version=6.0.5

In the command above I am using K10 version 6.0.5, since I see from your post that this is your current version, but please change it according to your needs.

Check the output of the helm command and make sure the operation completed successfully. Afterwards, verify that all K10 pods are up and running.
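
For example (standard checks; nothing K10-specific assumed here):

# All K10 pods should be Running once the upgrade settles
kubectl get pods -n kasten-io

Then confirm the size of the catalog PVC: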

kubectl get pvc -n kasten-io
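
If the catalog PVC still shows the old size, describing it surfaces the resize events and errors (assuming `catalog-pv-claim` is the PVC name; confirm it from the get pvc output first):

# Show events for the catalog PVC; the name may differ in your cluster
kubectl describe pvc catalog-pv-claim -n kasten-io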

Hope it helps.

Rubens
