Hi everyone,
I'm encountering a persistent issue with backup retention while using Kasten K10 8.0.6 on a Kubernetes cluster that exports backups to a Synology NAS via NFS.
The setup is configured to automatically delete exported backups from the Synology NFS share once the retention policy is triggered and the garbage collector runs. Kasten K10 reports the retention actions as successful, and I can see the retire operations marked as completed. However, the old backup data remains on the Synology storage: the Usage & Reports > Data Usage section confirms this, since storage consumption never decreases after the retire actions complete.
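In case it helps, this is roughly how I confirmed that the data is still sitting on the share after retirement (the NAS IP, export path, and mount point below are placeholders for my actual values):

```
# Confirm the Synology export is visible from a Linux host
showmount -e 192.168.1.50

# Mount the export and check how much space the K10 exports occupy
sudo mkdir -p /mnt/k10
sudo mount -t nfs 192.168.1.50:/volume1/k10-backups /mnt/k10
du -sh /mnt/k10     # size stays flat even after retire actions report success
sudo umount /mnt/k10
```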
I noticed that the ephemeral pods spawned by Kasten (named data-mover-svc-xxxxx) enter an Error state instead of transitioning cleanly from Pending through Running to termination, as the other pods do. All the necessary firewall rules for NFS, DNS, and NTP traffic are in place, and the permissions on the share are correct as well.
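For what it's worth, this is how I've been inspecting the failed movers so far (assuming everything runs in the kasten-io namespace; the pod name suffix is just a placeholder):

```
# List the data mover pods and spot the ones stuck in Error
kubectl get pods -n kasten-io | grep data-mover-svc

# Check the Events section for the failure reason (mount errors, image pulls, etc.)
kubectl describe pod data-mover-svc-xxxxx -n kasten-io

# Grab the container logs; --previous in case the container already restarted
kubectl logs data-mover-svc-xxxxx -n kasten-io
kubectl logs data-mover-svc-xxxxx -n kasten-io --previous
```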
Has anyone experienced similar retention issues with Synology NFS targets? Are there any best practices, configuration tweaks, or known compatibility concerns that could help ensure proper cleanup of expired backups?
Thanks in advance for your insights!
