There is no error message, just a failed policy run state.
Seeing this as well. 50+ dangling `backup-data-stats-*` pods, many over 9d old. I deleted the pods, but I did so once before and they started accumulating again. Is it safe to delete the `k10-content-store-passphrase-*` secrets as well?
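For anyone else cleaning these up in the meantime, here's roughly what I ran. This is a sketch, not an official procedure: it assumes K10 is in the default `kasten-io` namespace, and it only matches pods by the `backup-data-stats-` name prefix mentioned above.

```shell
# Sketch: list, then delete, dangling backup-data-stats pods.
# Assumes the default "kasten-io" install namespace; adjust if yours differs.

# Inspect first so you can see names and ages before deleting anything:
kubectl get pods -n kasten-io | grep 'backup-data-stats-'

# Delete the matching pods (name-prefix match only):
kubectl get pods -n kasten-io --no-headers \
  | awk '/^backup-data-stats-/ {print $1}' \
  | xargs -r kubectl delete pod -n kasten-io
```

I would not delete the `k10-content-store-passphrase-*` secrets without confirmation from the Kasten team, since passphrase secrets may still be needed to decrypt existing restore points.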
Hi @Satish, will the team take a look at making that "Monitoring Actions" timeout configurable, or tied to `backupTimeout`, so we can run jobs longer than 10 hours? What's the best way to do a multi-TB backup over an intermittent connection? Any chance that checking the "Ignore Exceptions and Continue if Possible" setting allows K10 to leverage previously sent blocks from a failed run?
@Satish, thanks for responding. I'm not able to run the helm commands because I installed from the helm template using kustomize. I don't think they would be that helpful anyway, because I used the default values and only customized the following:

```yaml
valuesInline:
  global:
    persistence:
      storageClass: nvme
  auth:
    ...
  externalGateway:
    create: true
  kanister:
    backupTimeout: 7200
```

There's too much info in the logs for me to share, and nothing in them indicates anything other than the task aborting after 10 hours. It does appear to be a hard-coded limit. It seems like that monitoring task should share the `backupTimeout` setting; otherwise long-running jobs will always fail at 10 hours.

Also, if subsequent runs of a job really don't make use of the blobs that were previously uploaded, then K10 seems like a non-viable backup solution for large datasets going to the cloud. With multiple TBs to seed over a 10M link, it's nearly impossible to maintain a stable connection for that long. I hope
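For context, one way to wire those overrides into kustomize is via its `helmCharts` generator (requires running kustomize with `--enable-helm`). This is a sketch, not my exact setup: the release name, namespace, and chart repo below are assumptions, and the `auth` section is elided here as in my snippet above.

```yaml
# kustomization.yaml (sketch; assumes kustomize's helmCharts generator)
helmCharts:
  - name: k10
    repo: https://charts.kasten.io
    releaseName: k10
    namespace: kasten-io
    valuesInline:
      global:
        persistence:
          storageClass: nvme
      externalGateway:
        create: true
      kanister:
        backupTimeout: 7200
```

If you installed with `helm install` directly instead, the same value could presumably be changed in place with `helm upgrade --reuse-values --set kanister.backupTimeout=7200`, though I haven't verified that path myself.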