The catalog server kept showing errors after a storage issue yesterday, and backups weren’t running anymore. I recreated the catalog DB and Kasten is working again, but I have 500 GB+ of snapshots eating away at my Ceph cluster storage and no easy way to find and delete them.

The snapshots from Kasten should still exist as VolumeSnapshot and VolumeSnapshotContent objects inside Kubernetes.

Does anyone have any ideas for me?

Hi @voarsh, thanks for posting this question.

Did you have a K10 DR policy enabled in your cluster? If you had a backup of the catalog, it can easily be recovered by performing a K10 DR restore (https://docs.kasten.io/latest/operating/dr.html#recovering-k10-from-a-disaster). Once restored, these snapshots will be retired according to the retention set in the policy.
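For reference, the restore itself is driven by the k10restore Helm chart. The chart name, secret name, and flags below are a sketch from memory of the linked docs and may differ between K10 versions, so defer to the documentation; values in angle brackets are placeholders for your own cluster ID, location profile, and DR passphrase:

```sh
# Recreate the DR passphrase secret that was set when K10 DR was enabled
# (secret name and key per the linked docs; verify for your K10 version).
kubectl create secret generic k10-dr-secret \
    --namespace kasten-io \
    --from-literal key=<dr-passphrase>

# Run the restore, pointing it at the source cluster ID and the
# location profile that holds the K10 DR backups.
helm install k10-restore kasten/k10restore \
    --namespace kasten-io \
    --set sourceClusterID=<source-cluster-id> \
    --set profile.name=<location-profile-name>
```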

 

If you don’t have that enabled, all you can do is manually delete the VolumeSnapshot/VolumeSnapshotContent objects from your Kubernetes cluster, for example with kubectl as sketched below.
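A minimal kubectl sketch of that cleanup, assuming CSI snapshots (snapshot.storage.k8s.io resources). The "k10-" name prefix mentioned in the comments is an assumption, so double-check which snapshots actually belong to Kasten before deleting anything:

```sh
# List all VolumeSnapshots in the cluster; Kasten-created ones typically
# live in the application namespaces and often carry a "k10-" name prefix
# (prefix is an assumption -- verify before deleting).
kubectl get volumesnapshots --all-namespaces

# Deleting a VolumeSnapshot also removes its bound VolumeSnapshotContent
# (and the underlying Ceph snapshot) when the content's deletionPolicy
# is Delete.
kubectl delete volumesnapshot <snapshot-name> -n <namespace>

# Contents with deletionPolicy: Retain, or contents whose VolumeSnapshot
# is already gone, are cluster-scoped and must be deleted directly.
kubectl get volumesnapshotcontents
kubectl delete volumesnapshotcontent <content-name>
```

With hundreds of snapshots you can script the deletes over the list output, but be certain nothing else still references those snapshots first.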


Thanks, yup. I painfully deleted them all.

