I keep getting the following error from the K10 dashboard, and it's stopping me from restoring or doing anything else on the dashboard. I am on 6.5.4 and have NOT updated; I've even reinstalled it at the same version, and the error keeps popping up.
Following up from this: my backup policies are set to keep only 1 backup (for exporting) to S3-compatible storage. I have a reasonably fresh Ceph cluster with 33 images. Earlier I noticed 35 images; some image(s) created by Kasten K10 haven't been deleted properly (and there's an unexpected increase in disk space usage). In the Ceph trash I see two images that it can't delete because of the error: [errno 39] RBD image has snapshots (error deleting image from trash)

So I run this:

for x in $(rbd list --pool ceph-blockpool); do
  echo "Listing snapshots for $x:"
  rbd snap ls ceph-blockpool/$x
done

The output doesn't show any snapshots for the images. When I try to gather more info about the two images that can't be purged:

rbd status ceph-blockpool/csi-snap-9cafd9dd-7fa2-40cc-b0fd-69c937008228
rbd: error opening image csi-snap-9cafd9dd-7fa2-40cc-b0fd-69c937008228: (2) No such file or directory
rbd status ceph-blockpool/csi-snap-4795fabd-
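One way to dig further, sketched under the assumption that the leftover snapshots have to be addressed by image ID once the image is in the trash (the name no longer resolves, hence the "No such file or directory"); the pool name comes from the listing above and the flags reflect reasonably recent Ceph releases:

# Get the IDs of the two trashed images
rbd trash ls --pool ceph-blockpool

# Address the trashed image by ID; --all also lists snapshots that live in the
# trash namespace rather than the user namespace
rbd snap ls --pool ceph-blockpool --image-id <image-id> --all

# If they turn out to be ordinary user-namespace snapshots that are no longer needed,
# purging them should let the trash entry be removed
rbd snap purge --pool ceph-blockpool --image-id <image-id>
rbd trash rm --pool ceph-blockpool <image-id>

If the snapshots only show up with --all (i.e. they are in the trash namespace), that usually means a clone of the image still exists somewhere, and the entry gets cleaned up once that clone is flattened or removed rather than by purging.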
Will Kasten K10 stop using SQLite? I've been using Kasten K10 for over a year, and over that time I've faced many issues where a storage problem resulted in unclean shutdowns of applications/IO. My MySQL and PostgreSQL databases have all been fine and survived, but Kasten hasn't. Will K10 look at moving away from SQLite? Would it hurt to deploy an actual database engine?
Hi. I have an issue with my K8s cluster where Kasten K10 made snapshots, and I then did an etcd restore to a point in time before they were created, so the VolumeSnapshot/VolumeSnapshotContent records are not present for Ceph RBD snapshots that still exist in my RBD pool. This leaves orphaned snapshots in my Ceph cluster that I can't easily remove. Does anyone have any ideas how I can find and remove these orphaned images?

I was thinking: within Kubernetes Storage → Persistent Volumes, output all Persistent Volumes and grep the csi section:

csi:
  controllerExpandSecretRef:
    name: rook-csi-rbd-provisioner
    namespace: rook-ceph
  driver: rook-ceph.rbd.csi.ceph.com
  fsType: ext4
  nodeStageSecretRef:
    name: rook-csi-rbd-node
    namespace: rook-ceph
  volumeAttributes:
    clusterID: rook-ceph
    imageFeatures: layering
    imageFormat: "2"
    imageName: csi-vol-cdb2ac07-6a0b-11ed-87d9-2e16f5b6210e
    journalPool: replicapool
    pool: replicapool
    storage.kubernetes.i
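For what it's worth, here is a rough sketch of that comparison; the pool name, temp file paths, and the assumption that every in-use image is referenced by some PV's imageName are placeholders rather than anything from the post:

# Image names Kubernetes still references through PVs
kubectl get pv -o jsonpath='{range .items[*]}{.spec.csi.volumeAttributes.imageName}{"\n"}{end}' | sort > /tmp/pv-images.txt

# Images actually present in the Ceph pool
rbd ls --pool replicapool | sort > /tmp/rbd-images.txt

# Pool images that no PV references are candidate orphans; csi-snap-* entries would need
# the same check against VolumeSnapshotContent snapshotHandles before removing anything
comm -23 /tmp/rbd-images.txt /tmp/pv-images.txt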
I recently purged everything in my Ceph cluster and started fresh. I have backups running, but I'm finding that in the two weeks since this new install the number of "trash" snapshots keeps going up. It was first 6, now 20, and I can't delete them. I haven't done a single restore via Kasten, so I don't understand why these images aren't being deleted (there should be no volume/image dependencies from a restore). Nevertheless, purging gives: "RBD image has snapshots (error deleting image from trash)". My retention is basically to keep 1 hourly backup; the rest are sent to external storage outside of Ceph. Does Kasten have bugs with snapshots? Before I started fresh, restoring PVCs, creating the clones, and restoring would result in loads of unclearable snapshots inside Ceph as well.
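A small sketch that may help show what those trash entries are still holding onto; the pool name is assumed, the flags reflect reasonably recent Ceph releases, and the loop assumes plain "id name" output from rbd trash ls:

# Long listing shows when each image was deferred to the trash and from what source
rbd trash ls --pool ceph-blockpool --long

# For each trashed image, list its snapshots, including any in the trash namespace;
# snapshots that only appear with --all usually point at a clone that still exists
for id in $(rbd trash ls --pool ceph-blockpool | awk '{print $1}'); do
  echo "=== trashed image $id ==="
  rbd snap ls --pool ceph-blockpool --image-id "$id" --all
done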
After my previous post, I deleted all my snapshots in VolumeSnapshots and VolumeSnapshotContents and, after reinstalling Kasten and running k10_primer, I find that my basic PVC backups aren't working, with the following error:

"Failed to find included PVCs"

K10 version: 5.0.8

Status:
[{"name":"admin","description":"Admin Service","passed":true},
{"name":"auth","description":"Auth Service","passed":true},
{"name":"bloblifecyclemanager","description":"Bloblifecyclemanager Service","passed":true},
{"name":"catalog","description":"Catalog Service","passed":true},
{"name":"controllermanager","description":"Controllermanager Service","passed":true},
{"name":"crypto","description":"Crypto Service","passed":true},
{"name":"dashboardbff","description":"Dashboardbff Service","passed":true},
{"name":"events","description":"Events Service","passed":true},
{"name":"executor","description":"Executor Service","passed":true},
{"name":"jobs","description":"Jobs Service","passed":true},
{"name":"logging","description":"Log
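Not an answer from the thread, but a quick sanity check for a "Failed to find included PVCs" style error is to confirm that the namespace the policy targets actually has Bound PVCs and that snapshotting is set up for their StorageClass; the names below are placeholders:

# Are there Bound PVCs in the application namespace the policy selects?
kubectl get pvc -n <app-namespace>

# Does the StorageClass they use have a corresponding VolumeSnapshotClass?
kubectl get storageclass
kubectl get volumesnapshotclass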
The catalog server kept showing errors after a storage issue yesterday, and backups weren't running anymore. I recreated the catalog DB and Kasten is working again, but I have 500 GB+ in snapshots that are eating away at my Ceph cluster storage and no easy way to find and delete them. The snapshots from Kasten should still be in VolumeSnapshots and VolumeSnapshotContents inside Kubernetes. Does anyone have any ideas for me?
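If those snapshots are indeed still tracked by Kubernetes, one possible route is to delete them through the snapshot API and let the CSI driver clean up the backing RBD objects; this assumes the VolumeSnapshotContents carry the Delete deletion policy, and the names are placeholders:

# List what Kubernetes still tracks
kubectl get volumesnapshots --all-namespaces
kubectl get volumesnapshotcontents

# Deleting a VolumeSnapshot removes its VolumeSnapshotContent and, with a Delete
# deletion policy, the underlying RBD snapshot as well
kubectl delete volumesnapshot <name> -n <namespace>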