Hey! I have added an immutable S3 bucket to export my backups to. Everything works as expected, I can back up and restore, but I periodically get the following error message and I don't know why, because the data wasn't modified by anyone (only Kasten has access to it):

"Blobs containing immutable backup data were found to be in an unexpected state that may indicate tampering has occurred. The affected repository location is of type S3, at the endpoint https://minio.domain.tld, in the bucket "k10-minio" at the path "k10/b5c2fd3e-6d52-41b0-8675-84dc6f2acaab/migration/repo/52717ccf-8f42-4d31-bec4-c3122e5ccee5/". The profile being used to access this repository is called "k10-minio" in the "kasten-io" namespace. Issue description: one or more data blobs have unexpected version history. See logs for more details."

What can I do to prevent these messages and be sure that my data is reliable? Thanks a lot!
Hey all, small question: I have seen in the release notes of v5.0 that the free starter license we are using will be decreased from 10 to 5 nodes. Will this also affect the starter license enabled in my K10 deployment? I am afraid that I can't back up my six-node cluster after that upgrade anymore. Thanks a lot.

EDIT: It seems like Veeam K10 doesn't count master nodes that can't schedule pods. Is this right?
Hello! I have a big problem. We had a small network outage in our datacenter, and after that I needed to restore an application. I am shocked because my backups seem to be "lost". After a while of troubleshooting and restarting K10 services, I tried a restore and it works, but the export to NFS doesn't. Maybe that's the reason why K10 tells me that there are no restore points for my application:

cause:
  cause:
    fields:
      - name: FailedSubPhases
        value:
          - Err:
              cause:
                cause:
                  cause:
                    cause:
                      cause:
                        message: command terminated with exit code 1
                      message: invalid repository password
                    file: kasten.io/k10/kio/kopia/repository.go:549
                    function: kasten.io/k10/kio/kopia.ConnectToKopiaRepository
                    linenumber: 549
                    message: Failed to connect to the backup repository
Hello! I am trying to back up a namespace containing a deployment with the following resources: deployments, a service, a configmap, secrets, and PVCs/PVs (storage classes rook and nfs). It seems that I have problems with the nfs storage class: if I try to back up every resource, I get the error "storageclass not supported". That's no problem, I thought, so I tried excluding the PVC by name in the policy. But then the backup got stuck in the phase "Snapshotting Application Components". Can you help me out with this? It's freaking me out.

Kubernetes 1.22, freshly installed K10 4.5.9
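For reference, this is roughly what my policy looks like with the PVC exclusion. This is a sketch based on my reading of the K10 policy filter docs, so the exact filter syntax may differ by version, and the names (policy, namespace, PVC) are placeholders for my real ones:

```yaml
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: nfs-app-backup            # placeholder policy name
  namespace: kasten-io
spec:
  frequency: "@daily"
  retention:
    daily: 7
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: my-app   # placeholder app namespace
  actions:
    - action: backup
      backupParameters:
        filters:
          excludeResources:
            - resource: persistentvolumeclaims
              name: my-nfs-pvc    # placeholder name of the nfs-backed PVC
```

With this in place I would expect the nfs PVC to be skipped entirely, but the snapshot phase still hangs.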
Hi! I have problems using a blueprint to back up a MariaDB deployment in Kubernetes. After running through a series of error messages, I made it to the last one that does not let me successfully back up the workload with a blueprint:

cause:
  cause:
    fields:
      - name: message
        value: 'Failed while waiting for Pod kanister-job-t8pw4 to complete: Pod failed or did not transition into complete state: Pod kanister-job-t8pw4 failed. Pod details (&Pod{ObjectMeta:{kanister-job-t8pw4 kanister-job- APPNAME-CUSTOMER-test 754772e9-403e-447e-8c4a-d6a6f3dbe006 42480080 0 2022-04-29 09:55:17 +0000 UTC <nil> <nil> map[createdBy:kanister] map[cni.projectcalico.org/containerID:ab374f1fc96af0c13fd75443989d41e27533a3fc5cf85e8c4fee40dc8996a748 cni.projectcalico.org/podIP: cni.projectcalico.org/podIPs:] [] [] [{Go-http-client Update v1 2022-04-29 09:55:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:l
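To isolate whether the failure comes from my dump command or from the blueprint wiring itself, I stripped the blueprint down to a minimal skeleton. This is not K10's reference MariaDB blueprint, just a bare Kanister Blueprint with a single KubeTask phase; the blueprint name and image are placeholders, and the real dump logic would replace the echo:

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mariadb-blueprint          # placeholder blueprint name
  namespace: kasten-io
actions:
  backup:
    phases:
      - func: KubeTask
        name: dumpMariaDB
        args:
          namespace: "{{ .Deployment.Namespace }}"
          image: mariadb:10.6      # placeholder image
          command:
            - bash
            - -o
            - pipefail
            - -c
            - |
              # The real mysqldump/upload would go here; while debugging
              # the pod failure I keep the command trivial on purpose.
              echo "kanister phase started"
```

Even a skeleton like this should let the kanister-job pod transition into a complete state, so if it still fails, the problem is presumably in the pod scheduling or image pull rather than in my dump command.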
Hello! I am trying to find out how to clean up backups taken earlier, and I am wondering where to find this feature. Do I risk any problems if I delete the backups directly in my S3 location? Thanks a lot.
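From what I can tell from the docs, K10 wants you to retire restore points through its own API instead of deleting objects in the S3 bucket directly (which I assume could corrupt the repository). Something like the following RunAction is what I understood, where the restore point name and application namespace are placeholders for my own; please correct me if the CRD shape is different:

```yaml
apiVersion: actions.kio.kasten.io/v1alpha1
kind: RunAction
metadata:
  generateName: retire-action-
spec:
  subject:
    apiVersion: apps.kio.kasten.io/v1alpha1
    kind: RestorePoint
    name: my-restorepoint       # placeholder restore point name
    namespace: my-app           # placeholder application namespace
```

Is this the intended way, and does it also free the space in the S3 bucket?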