Hello everyone!
I am using Kasten K10 v4.5.8 (air-gapped installation) on Kubernetes RKE v1.21.8, and I am performing logical backups for some databases (MongoDB/PostgreSQL/MSSQL).
The snapshots are exported to an S3-compatible MinIO object store (version RELEASE.2022-02-18T01-50-10Z) installed on bare-metal infrastructure.
I defined a policy that backs up the application three times an hour, with retention set to 'ONE HOURLY EXPORTED SNAPSHOT', so that I always keep one local and one exported restore point.
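For reference, the policy looks roughly like the sketch below (a minimal sketch based on the standard K10 Policy CRD; the policy/profile names, the sub-hourly minutes, and the mongodb app namespace are placeholders for my actual values):

apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: mongo-hourly-policy        # placeholder name
  namespace: kasten-io
spec:
  frequency: '@hourly'
  subFrequency:
    minutes: [0, 20, 40]           # three runs per hour
  retention:
    hourly: 1                      # keep one hourly restore point
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: '@hourly'
        profile:
          name: minio-profile      # placeholder profile name
          namespace: kasten-io
        exportData:
          enabled: true
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: mongodb   # placeholder app namespace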
In the blueprint, the delete phase is defined as follows:
delete:
  inputArtifactNames:
    # The kopia snapshot info created in backup phase can be used here
    # Use the `--kopia-snapshot` flag in kando to pass in `mongoBackup.KopiaSnapshot`
    - mongoBackup
  phases:
    - func: KubeTask
      name: deleteFromStore
      args:
        namespace: "{{ .Namespace.Name }}"
        image: ghcr.io/kanisterio/mongodb:0.72.0
        command:
          - bash
          - -o
          - errexit
          - -o
          - pipefail
          - -c
          - |
            backup_file_path='rs_backup.gz'
            kopia_snap='{{ .ArtifactsIn.mongoBackup.KopiaSnapshot }}'
            kando location delete --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}"
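(As a side note, with standalone Kanister this delete phase can also be exercised by hand against an existing backup actionset, something like the sketch below; the namespace and the source actionset name are placeholders, and in my setup it is K10 itself that triggers the phase when a restore point is retired:)

kanctl --namespace kasten-io create actionset --action delete --from "backup-lph6c"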
I left Kasten doing its job, and although it executes the snapshot/backup/delete actions three times an hour, I do indeed always have exactly one local and one exported restore point.

This looks great, but if I browse the defined bucket on the MinIO server, I can still find the objects of the previous (and deleted) restore points.
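For context, this is roughly how I inspect the bucket (using the MinIO client; the alias and bucket name are placeholders for my setup):

mc ls --recursive myminio/k10-bucket
mc du myminio/k10-bucket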
I am now wondering whether this is normal, or whether I skipped some preliminary MinIO server configuration.
Many thanks for your help!