I have a policy that creates a snapshot of an application and exports it, for example, to an S3 bucket. Now, if I delete the local snapshots or they are no longer available for any reason, how can I restore from the S3 bucket? Kasten then says "No restore points." But it's only the local snapshots that are missing. Restoring from the S3 bucket should still be possible, right? Import doesn't work because it's the same cluster.
Hey there. You will need to use DR restore, provided you have valid restore points from your DR backups. DR backups protect against local failures. Read more about it here: https://docs.kasten.io/latest/operating/dr.html
Hey David, thanks for your response. When I enable and configure Kasten DR (S3 destination) and trigger backups, I can restore them using "Restore Kasten." But here again, the same issue: as soon as I delete the local snapshot, in this case "kasten-io-scheduled-zr5qb," it can no longer find any restore points, so there is no way to use "Restore Kasten."
This may be a case of retention misconfiguration. Double-check the retention settings for exported snapshots in your policy.
Then make sure you are exporting snapshot data and not just references.
Are you referring to the retention policy within Veeam for the export? With Kasten DR, though, I can't configure anything in that regard, and I still have the same issue there.
I checked the retention policy, and it is exporting the "snapshot data." I can see the data arrive in the S3 bucket and remain there. I'm wondering why Kasten doesn't search the S3 bucket and therefore can't find the snapshots again.
Can you perform a restore as a test before deleting the local snapshot? Here’s the corresponding doc to help guide you: https://docs.kasten.io/latest/usage/restore.html#restoring-applications
Restoring from both local and S3 backups works without any issues. However, if I delete the local snapshots, no restore points remain available for recovery.
Let’s try this: instead of manually deleting the local snapshot, let’s use the policy’s lifecycle rules and see if the issue persists.
Set your snapshot policy to take hourly snapshots and retain 3 hourly snapshots.
Set the retention of exported snapshots to custom and retain 6 hourly snapshots.
Let the policy run for 6 hours, then try a test restore. You should then see 3 local restore points, plus 3 additional restore points that exist only as exports.
Don’t manually delete anything; let the lifecycle rules control what gets deleted and when. A sketch of such a policy is below.
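For reference, here is a minimal sketch of what that policy could look like as a Kasten Policy custom resource, based on the documented fields of the Policy CRD. The names used here ("my-app-hourly", "s3-profile", "my-app") are placeholders, not values from your environment:

```bash
# Hypothetical hourly policy: 3 local restore points, 6 exported ones.
# Adjust metadata.name, the profile name, and the app namespace to match
# your own setup before applying.
kubectl apply -f - <<'EOF'
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: my-app-hourly          # placeholder policy name
  namespace: kasten-io
spec:
  frequency: '@hourly'         # take hourly snapshots
  retention:
    hourly: 3                  # keep 3 local restore points
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: '@hourly'
        profile:
          name: s3-profile     # placeholder S3 location profile
          namespace: kasten-io
        exportData:
          enabled: true        # export the snapshot data, not just references
        retention:
          hourly: 6            # keep 6 exported restore points
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: my-app   # placeholder app namespace
EOF
```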
Thanks for your help! If I set the retention policy the way you suggest, it works as it should: I can see backups that exist both locally and on S3, as well as backups that exist only on S3. I just want to prepare for the worst-case scenario, e.g. the RestorePointContents being accidentally deleted. I don’t understand why Veeam Kasten requires these for the backups stored on S3; they should be accessible independently of what happens inside the Kubernetes cluster. So I still wonder how I can restore these backups from S3 if something goes wrong in the cluster and the mapping to the S3 backups is lost.
I believe Kasten links the metadata, so when you delete a restore point locally, it also issues a delete call against the S3 bucket. That takes some time, which is likely why you still see the data in the bucket for a while.
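By the way, since you mentioned RestorePointContents: those are the cluster-scoped records Kasten keeps for restore points, and you can inspect them with kubectl. A quick sketch, just for checking what is there:

```bash
# List Kasten's cluster-scoped restore point records; as I understand it,
# deleting one of these removes the mapping to the exported data in S3.
kubectl get restorepointcontents.apps.kio.kasten.io
```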
If your Kasten instance fails, you can redeploy it and link the new instance back to the S3 bucket where the backups were stored. Then you can log into Kasten and initiate the “Restore Kasten” function in settings.
You will need the Cluster ID of the original cluster where Kasten DR was enabled, and you will need the passphrase you set up for that original instance.
From that point, you can restore the exported snapshots as usual.
You can read about that process here: https://docs.kasten.io/latest/operating/dr.html#recovering-veeam-kasten-from-a-disaster-via-ui
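The docs linked above also describe a CLI route using the k10restore Helm chart; roughly, it looks like the sketch below. The angle-bracket values are the passphrase, Cluster ID, and location profile name from the original install, so treat this as an outline and follow the linked doc for the exact steps for your version:

```bash
# Recreate the DR passphrase secret from the original Kasten instance
kubectl create secret generic k10-dr-secret \
  --namespace kasten-io \
  --from-literal key=<passphrase>

# Run the restore job against the original cluster's ID and the
# location profile that holds the DR backups
helm install k10-restore kasten/k10restore \
  --namespace=kasten-io \
  --set sourceClusterID=<source-cluster-id> \
  --set profile.name=<location-profile-name>
```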