Hello,
We have observed in the events of the kasten-io namespace that several PVCs remain in the Provisioning state, but when we check the cluster, those PVCs are never actually provisioned.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
v1/events
LAST SEEN TYPE REASON OBJECT MESSAGE
3m17s Normal Provisioning persistentvolumeclaim/kanister-pvc-15kkr External provisioner is provisioning volume for claim "kasten-io/kanister-pvc-15kkr"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
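For reference, the claims that are stuck waiting to be provisioned can be listed with a jsonpath filter like the one below (a sketch; any equivalent filter on the Pending phase would do):

```shell
# List PVCs in kasten-io that are still Pending (never bound)
kubectl -n kasten-io get pvc \
  -o jsonpath='{range .items[?(@.status.phase=="Pending")]}{.metadata.name}{"\n"}{end}'
```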
Once the snapshot finishes, the PVC is deleted correctly in Kubernetes, but an orphaned subvolume remains in the CephFS storage:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-03-28T15:43:47.536161813Z I0328 15:43:47.536137 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kasten-io", Name:"kanister-pvc-15kkr", UID:"e67dfd43-b5c1-40fb-8a13-7c3923c3724c", APIVersion:"v1", ResourceVersion:"2383954963", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "ocs-storagecluster-cephfs": error getting handle for DataSource Type VolumeSnapshot by Name snapshot-copy-6xvmkfwx: error getting snapshot snapshot-copy-6xvmkfwx from api server: volumesnapshots.snapshot.storage.k8s.io "snapshot-copy-6xvmkfwx" not found
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These events refer to PVCs that are used to take daily snapshots of the Kubernetes cluster. The orphaned subvolumes accumulate in the Ceph cluster, and as a workaround we manually delete them.
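For reference, our manual cleanup looks roughly like the sketch below, run from the rook-ceph-tools pod. The filesystem name and subvolume group shown are the ODF/OCS defaults on our cluster and may differ elsewhere:

```shell
# List the CSI-managed subvolumes (assumes the ODF default filesystem
# name "ocs-storagecluster-cephfilesystem" and subvolume group "csi")
ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi

# Remove an orphaned subvolume by name
ceph fs subvolume rm ocs-storagecluster-cephfilesystem <subvolume-name> --group_name csi
```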
Could you help us delete these stale PVCs and their references in the kasten-io namespace, please?
Thank you