Question

Removing restore point didn't remove snapshots

  • 15 November 2022

Userlevel 3

I removed all restore points for an application/namespace, yet when I look at the `VolumeSnapshot` CRDs the snapshots remain, as do the backing store representations.

Shouldn’t removing a restore point remove the related snapshots?


9 comments

Userlevel 6

@Aaron Oneal Thanks for posting the question.

By removing the restore point, do you mean deleting the `restorepoint` resource from the cluster using `kubectl`, or deleting the restore point from the Kasten UI?

 

These are two different operations. 

There is a 1:1 binding relationship between two different Kubernetes resources: `restorepoints` (namespace-scoped) and `restorepointcontents` (cluster-scoped).
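For reference, a quick way to see that pairing with `kubectl`; I'm assuming the K10 CRDs live in the `apps.kio.kasten.io` API group, adjust if your install differs:

```
# Namespace-scoped restore points for the application
kubectl get restorepoints.apps.kio.kasten.io -n <app-namespace>

# Cluster-scoped restore point contents that back them
kubectl get restorepointcontents.apps.kio.kasten.io
```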

 

Deleting just the `restorepoint` will not retire the snapshot artifacts. You will have to delete the corresponding `restorepointcontent` to spawn a `RetireAction` that retires the artifacts (deleting the restore point from the UI does this in the background).

This is expected behaviour; it avoids situations where deleting the namespace would also affect the restore points.
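Roughly, that looks like the following from the command line (the resource name is a placeholder):

```
# Delete the cluster-scoped restorepointcontent; K10 then spawns a RetireAction
kubectl delete restorepointcontents.apps.kio.kasten.io <restorepointcontent-name>
```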

 

If you are already deleting the `restorepointcontents` and you still see volumesnapshots left over (deleting only the `restorepoint` won’t remove the `restorepointcontents`), take a look at the status of the `RetireActions` that get created and see whether any of them are failing.
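One way to check, assuming the K10 action CRDs live in the `actions.kio.kasten.io` API group:

```
# List retire actions and their state (add -A if they are namespaced in your install)
kubectl get retireactions.actions.kio.kasten.io

# Inspect a specific one for errors (name is a placeholder)
kubectl describe retireactions.actions.kio.kasten.io <retireaction-name>
```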

Let me know if that is the case.

Userlevel 3

I used `kubectl` to remove `restorepointcontents`. I saw the retire actions spawn and complete. However, the `volumesnapshots` remain and I’m having to delete them all manually.
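For reference, the manual cleanup amounts to something like this (the namespace name is a placeholder):

```
# List leftover snapshots in the application namespace
kubectl get volumesnapshots -n <app-namespace>

# Remove them all (only if you no longer need any of the snapshots)
kubectl delete volumesnapshots --all -n <app-namespace>
```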

Userlevel 6

This seems to be a problem in that case.

As soon as the `restorepointcontent` is deleted, the `RetireAction` deletes the `volumesnapshot` resource, and the action only succeeds once the snapshot resource is removed.

Can you confirm which storage provisioner you are using?

What is the deletion policy specified in the `VolumeSnapshotClass` that the `volumesnapshot` uses?
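You can check both with something like this (snapshot and class names are placeholders):

```
# Find which VolumeSnapshotClass a leftover snapshot references
kubectl get volumesnapshot <snapshot-name> -n <app-namespace> \
  -o jsonpath='{.spec.volumeSnapshotClassName}'

# Check that class's deletion policy (Delete vs Retain)
kubectl get volumesnapshotclass <class-name> -o jsonpath='{.deletionPolicy}'
```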

Userlevel 3

Provisioner is Ceph RBD and policy is Delete.

Same problem here using OpenStack. Did you solve this problem? @Aaron Oneal

Userlevel 3

I had to manually remove the restore points and ultimately move to a different backup solution.

Userlevel 3

@Aaron Oneal This sounds like an issue with removing the volumesnapshots during the `RetireAction`. Sadly, we will need K10 logs for more details on why this is occurring.
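If you want to dig in yourself first, one place to start is the K10 executor logs, assuming the default `kasten-io` install namespace (the deployment name may differ in your version):

```
# Logs from the K10 executor, which runs backup/retire actions
kubectl logs -n kasten-io deployment/executor-svc --all-containers --since=1h
```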

@Aaron Oneal Which solution do you use now? K10 doesn't seem to be very stable in various respects for me.

Userlevel 3

Velero, though I decided to go with its Kopia snapshotting instead of CSI / filesystem.
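For anyone curious, that setup is roughly the following; the flags are from recent Velero releases and may differ in yours, and the backup/namespace names are placeholders:

```
# At install time, enable the node agent and pick Kopia as the uploader
#   velero install --use-node-agent --uploader-type=kopia ...

# Back up the app namespace using file-system (Kopia) backup for all volumes
velero backup create myapp-backup --include-namespaces myapp --default-volumes-to-fs-backup
```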
