Hi.
So, I've got an issue with my Kubernetes cluster: Kasten K10 made snapshots, and I then did an etcd restore to a point in time before they were created, so the VolumeSnapshot/VolumeSnapshotContent records are no longer present for Ceph RBD snapshots that still exist in my RBD pool.
This leaves orphaned snapshots in my Ceph cluster that I can’t easily remove.
Does anyone have any ideas on how I can find and remove these orphaned images?
I was thinking:
Within Kubernetes (Storage → Persistent Volumes), output all PersistentVolumes and grep for the csi: section:
```yaml
csi:
  controllerExpandSecretRef:
    name: rook-csi-rbd-provisioner
    namespace: rook-ceph
  driver: rook-ceph.rbd.csi.ceph.com
  fsType: ext4
  nodeStageSecretRef:
    name: rook-csi-rbd-node
    namespace: rook-ceph
  volumeAttributes:
    clusterID: rook-ceph
    imageFeatures: layering
    imageFormat: "2"
    imageName: csi-vol-cdb2ac07-6a0b-11ed-87d9-2e16f5b6210e
    journalPool: replicapool
    pool: replicapool
    storage.kubernetes.io/csiProvisionerIdentity: 1669022460388-8081-rook-ceph.rbd.csi.ceph.com
  volumeHandle: 0001-0009-rook-ceph-0000000000000001-cdb2ac07-6a0b-11ed-87d9-2e16f5b6210e
```
Specifically, I'm interested in the imageName field, e.g. csi-vol-cdb2ac07-6a0b-11ed-87d9-2e16f5b6210e.
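For example, jsonpath can pull just that field instead of grepping full YAML (a sketch, assuming kubectl access to the cluster; the file name in-use.txt is just mine):

```bash
# The backing RBD image name of every PersistentVolume, one per line,
# sorted/deduplicated for the comparison further down.
kubectl get pv -o jsonpath='{range .items[*]}{.spec.csi.volumeAttributes.imageName}{"\n"}{end}' \
  | sort -u > in-use.txt
```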
That imageName corresponds to an image in the Ceph dashboard under Block → Images.
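The pool side can be dumped with rbd ls, for example via the Rook toolbox (assuming it is deployed under its default name, rook-ceph-tools; the pool name replicapool comes from the PV spec above):

```bash
# Every RBD image currently in the pool, which should include both the
# csi-vol-* data volumes and any leftover snapshot clones.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rbd ls replicapool | sort -u > all.txt
```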
The end result would be:

1. Remove all current Kasten K10 snapshots from Kubernetes (the remaining VolumeSnapshots/VolumeSnapshotContents).
2. Get a list of all Ceph RBD images in the pool.
3. Pull just the imageName line out of every PersistentVolume.
4. Match the two lists and remove the images that do not exist in any PersistentVolume, i.e. the orphaned images. Something like the sketch below.
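Steps 2 and 3 are the two commands above; step 4 could then be a plain set difference (a sketch, untested; rbd trash mv is reversible via rbd trash restore, which feels safer than rbd rm):

```bash
# Images that exist in the pool but are referenced by no PersistentVolume.
# Assumes in-use.txt and all.txt were produced by the commands above, and that
# step 1 already removed every remaining VolumeSnapshot/VolumeSnapshotContent.
comm -23 all.txt in-use.txt > orphans.txt

# Review orphans.txt by hand, then move the candidates to the RBD trash
# instead of deleting them outright; a mistake can be undone with
# `rbd trash restore`.
while read -r img; do
  kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rbd trash mv "replicapool/${img}"
done < orphans.txt
```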
Am I correct? Is there an easier way to do this?