Solved

Catalog database corrupt - recreated and lost all records of my Ceph snapshots

  • 12 September 2022
  • 2 comments
  • 52 views

Userlevel 3

The catalog server kept showing errors after a storage issue yesterday, and backups weren’t running anymore. I recreated the catalog DB and Kasten is working again, but I have 500 GB+ of snapshots eating away at my Ceph cluster storage and no easy way to find and delete them.

 

The snapshots from Kasten should still exist as VolumeSnapshot and VolumeSnapshotContent objects inside Kubernetes.
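
For reference, something like this should list what is still hanging around (the object name below is just a placeholder):

  # List the CSI snapshot objects left behind by Kasten, across all namespaces
  kubectl get volumesnapshots --all-namespaces
  kubectl get volumesnapshotcontents

  # Inspect one to confirm it points at a Ceph-backed snapshot
  kubectl describe volumesnapshotcontent <snapshotcontent-name>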

Does anyone have any ideas for me?


Best answer by jaiganeshjk 12 September 2022, 08:27


2 comments

Userlevel 6
Badge +2

Hi @voarsh, thanks for posting this question.

Did you have a K10 DR policy enabled in your cluster? If you have a backup of the catalog, it can easily be recovered by performing a K10 DR restore (https://docs.kasten.io/latest/operating/dr.html#recovering-k10-from-a-disaster). Once restored, these snapshots will be retired according to the retention set in the policy.
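
Roughly, the restore flow from the linked documentation looks like the sketch below. The passphrase, cluster ID and profile name are placeholders, and chart values can differ between K10 versions, so please follow the docs rather than copy-pasting this:

  # Recreate the DR passphrase secret that was set when K10 DR was enabled
  kubectl create secret generic k10-dr-secret \
    --namespace kasten-io \
    --from-literal key=<dr-passphrase>

  # Install the restore chart, pointing it at the source cluster ID and the
  # location profile that holds the K10 DR backups
  helm install k10-restore kasten/k10restore \
    --namespace kasten-io \
    --set sourceClusterID=<source-cluster-id> \
    --set profile.name=<location-profile-name>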

 

If you don’t have this enabled, all you can do is manually delete the VolumeSnapshots/VolumeSnapshotContents from your Kubernetes cluster.
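
A minimal sketch of that manual cleanup, assuming the VolumeSnapshotContents use the Delete deletion policy so that removing a VolumeSnapshot also removes the underlying Ceph snapshot (names and namespaces are placeholders):

  # Check the deletion policy first: with "Delete", removing the VolumeSnapshot
  # also removes the VolumeSnapshotContent and the snapshot on the Ceph side
  kubectl get volumesnapshotcontents \
    -o custom-columns=NAME:.metadata.name,POLICY:.spec.deletionPolicy

  # Delete a single leftover snapshot
  kubectl delete volumesnapshot <snapshot-name> -n <namespace>

  # Or clear every VolumeSnapshot in a namespace once you are sure nothing needs them
  kubectl delete volumesnapshots --all -n <namespace>

If any contents are set to Retain, the VolumeSnapshotContent (and the snapshot on the Ceph side) would need to be cleaned up separately.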

Userlevel 3




Thanks, yup. I painfully deleted them all.
