Following up from this:
I have my backup policies set to keep only 1 backup (for export) to S3-compatible storage.
I have a reasonably fresh Ceph cluster with 33 images. Earlier I noticed 35 images, so it seems some images created by Kasten K10 haven't been deleted properly… (along with an unexpected increase in disk space usage)
In the Ceph trash I see two images that it can't delete because of this error: [errno 39] RBD image has snapshots (error deleting image from trash)
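For context, this is roughly how I'm looking at the trash from the CLI (the pool name is mine; the rest is just the stock rbd trash commands as I understand them, and the image ID below is only a placeholder):

# List the entries sitting in the RBD trash for the pool,
# which shows the trash IDs and the original image names
rbd trash ls --pool ceph-blockpool

# Trying to remove one of them by its trash ID is what produces the
# "[errno 39] RBD image has snapshots" error above
rbd trash rm ceph-blockpool/<image-id>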
So I run this:
for x in $(rbd list --pool ceph-blockpool); do
  echo "Listing snapshots for $x:"
  rbd snap ls ceph-blockpool/$x
done
The output doesn’t show any snaps for images…
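One thing I realise about that loop: it only iterates over images returned by rbd list, so the two images already in the trash are skipped entirely, and a plain rbd snap ls only shows user snapshots. I believe newer Ceph releases have an --all flag on snap ls that also lists snapshots in other namespaces (including the trash namespace that ceph-csi uses), so I'm planning to re-run a variant like this, assuming my rbd version supports it:

# Same loop, but listing snapshots in all namespaces, since CSI-created
# snapshots can sit in the trash namespace where the plain listing hides them
for x in $(rbd list --pool ceph-blockpool); do
  echo "Listing snapshots (all namespaces) for $x:"
  rbd snap ls --all ceph-blockpool/"$x"
done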
When I try to gather more info about the two images that can't be purged:
rbd status ceph-blockpool/csi-snap-9cafd9dd-7fa2-40cc-b0fd-69c937008228
rbd: error opening image csi-snap-9cafd9dd-7fa2-40cc-b0fd-69c937008228: (2) No such file or directory
rbd status ceph-blockpool/csi-snap-4795fabd-f45c-4e6d-8ec0-53cb3283a5c3
rbd: error opening image csi-snap-4795fabd-f45c-4e6d-8ec0-53cb3283a5c3: (2) No such file or directory
I don't understand why the extra space (a couple of hundred GB) is being used, or why there are 2 images I can't remove because Ceph thinks they have snapshots, when my first command doesn't show any snapshots in the pool. And apparently the 2 images don't even exist when I try to query them for more info or associated snapshots.
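In case it helps, this is what I'm planning to run next: check where the usage actually is, and try addressing the two stuck images by their trash IDs instead of by name. I'm not certain every rbd subcommand accepts --image-id on my version, so treat this as a sketch (again, <image-id> is a placeholder):

# Per-image provisioned vs actual usage, to see what's eating the extra space
rbd du --pool ceph-blockpool

# Pool-level view of the same thing
ceph df detail

# Get the trash IDs of the two stuck images
rbd trash ls --pool ceph-blockpool

# Then, if these accept --image-id, look at and purge their snapshots by ID
# before retrying the trash removal
rbd snap ls --pool ceph-blockpool --image-id <image-id>
rbd snap purge --pool ceph-blockpool --image-id <image-id>
rbd trash rm ceph-blockpool/<image-id>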