CephFS backups to S3 with Kasten are currently almost impossible for me: generating the volume from the snapshot takes much longer than the configured timeout limits even for a medium amount of data (50 GB), and it uses a lot of resources.
I would expect shallow read-only snapshots to solve this, but it seems that Kasten still uses the old approach. From what I gathered from the documentation, ceph-csi should use shallow snapshots as long as the snapshot volume is mounted read-only.
Is my configuration incorrect or has Kasten not yet implemented this?
Thank you very much,
Pascal
@pascalzero Thank you for posting this question.
As you mentioned, CephFS takes a long time to restore from a snapshot, and we have seen this happen a lot, causing exports to fail.
We can help you tweak the timeout for the wait period in this case.
In the meantime, I will go through the ceph-csi docs for the 3.7 release and read up on these shallow read-only volumes.
Do you know if this feature requires any changes to the accessModes of the PVC that is created with the VolumeSnapshot as dataSource?
Currently, K10 uses the accessMode of the original PVC for the temporary PVCs created during exports. If a change in the spec is required to utilise the shallow read-only clone, we will have to file a feature request.
Hi @jaiganeshjk , thanks for the quick response.
I believe this could be a real game changer for CephFS backups. We would rather not change the timeouts for now, as the copying also puts quite a bit of load on the system, so a proper solution is definitely required.
We have reverted to other software/scripts for the time being, but it would of course be ideal if Kasten handled this nicely.
As I understand the spec, simply using read-only access for the temporary PVC should already result in the shallow clone being used, as per this item from the design doc:
Volume source is a snapshot, volume access mode is *_READER_ONLY.
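For illustration, here is a minimal sketch of what such a temporary PVC would look like (the StorageClass and snapshot names are placeholders):

```yaml
# Hypothetical restore PVC: becomes a shallow, snapshot-backed volume because
# the dataSource is a VolumeSnapshot and the access mode is ReadOnlyMany (ROX).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shallow-restore-pvc          # placeholder name
spec:
  storageClassName: ceph-filesystem  # placeholder CephFS CSI StorageClass
  accessModes:
    - ReadOnlyMany                   # ROX is what triggers the shallow behaviour
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-cephfs-snapshot         # placeholder VolumeSnapshot name
  resources:
    requests:
      storage: 50Gi
```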
I'd be really happy to see this happen any time soon - if there is a way for me to help (e.g. through testing), please let me know.
Cheers,
Pascal
Thanks Pascal. Reading about this feature leads me to believe that we can get it working with K10 out of the box.
I will do further reading and some tests to get this working and keep you posted on the same.
You got me excited. Good luck!
@pascalzero I went through the testing and found that it cannot work with K10 out of the box.
Unfortunately, this feature needs the PVCs to be created with the accessMode set to `ROX`(which is the only supported accessMode for snapshot-backed volumes).
However, K10 takes the accessMode for the temporary PVC from the original PVC's manifest.
We don’t have a way to override this as of today.
I will open a feature request to support this and will keep you informed once it is supported in the product.
Hi @jaiganeshjk ,
Thanks a lot for the investigation and getting back on this.
That's more or less what I expected, but I'm crossing my fingers that it might land soon, as it will surely be a vital feature for a lot of users once Ceph CSI 3.7 adoption has spread a bit further.
Again if there’s anything I can do to help testing, let me know.
Hi @jaiganeshjk ,
Just wanted to check in: do you have any insight into the release planning, and can this feature be placed on the timeline yet?
We are still struggling with the issue every night when backups are running, as storage load rises so sharply due to the CephFS copying that it impacts overall system stability.
Thanks a lot!
Quick update. With the release of ceph-csi 3.8 today, the new shallow snapshots will be used by default as long as the access mode is ROX. Kasten, however, seems to use RWO.
I have checked the Kanister source code, but the snapshot mounting for S3 upload seems to happen in the closed Kasten code base?
It would be great to see some work put into this.
Thank you for your interest. We are tracking this internally.
However, we don’t have any timelines yet.
@pascalzero You are right. As I mentioned earlier, we reuse the accessMode from the original PVC that is being exported.
We will update you once we have this capability in the product.
I was hoping that after 4 months I'd hear something new about this issue.
Just to give you an idea of how this affects me: I cannot back up CephFS volumes with K10, because I have hundreds and hundreds of GBs that must be copied every time K10 makes a backup from a snapshot. Not only is that crazily IO-expensive, it also times out, and the jobs fail after 3 retries (a limit I can't increase).
For the most part rsync works (for my other workloads that use CephFS), but I am deploying Bitbucket and it doesn't preserve file permissions, which breaks my backup. So I must find another way: I can't use rsync for backing up this application, and I can't use K10… :(
Hi,
we are facing the same issue: when backup jobs start, the clone creation takes too long and fills the CephFS pool.
Is there any solution for this issue?
Or is there a roadmap to solve this problem?
We are already working on supporting shallow clones for CephFS. I don't have a definite timeline for this.
But you can expect it soon.
@voarsh @Laksoy The only workaround that I have for now is to increase the timeout that K10 waits for the pod to be ready, so that it doesn't time out while the clone is being created.
Currently, the timeout is set to 15 minutes; you can increase it based on how much time it takes for your largest volume to get cloned.
This way, you can ensure that the backups complete and that no stale clones are left behind in the Ceph filesystem.
You will need to upgrade your K10 release with the Helm value `--set kanister.podReadyWaitTimeout=<timeout_value_in_minutes>`.
I know that this is just a temporary workaround to make it work with cephFS . I will be able to update this thread once we have support for the shallow clone in CephFS.
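For illustration, the same setting expressed as a Helm values file (the 60-minute value is a placeholder; size it to the clone time of your largest volume):

```yaml
# k10-values.yaml -- sketch only; 60 is a placeholder value in minutes
kanister:
  podReadyWaitTimeout: 60   # how long K10 waits for the Kanister pod (default: 15)
```

Applied with something like `helm upgrade k10 kasten/k10 --namespace kasten-io --reuse-values -f k10-values.yaml`.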
Glad that you’re at least looking into the issue. I will investigate your workaround for now.
There was another question I had about CephFS clones as-is. K10 creates a linked clone within the CephFS filesystem; when it is deleted, either by retention policy or manually, does it actually delete the snapshot within CephFS? Trying to clear up old CephFS snapshots for volumes by hand in the Ceph admin UI is atrociously painful and slow, and the Ceph tooling around managing CephFS snapshots is sorely lacking - that's another reason I don't like playing with CephFS snapshots.
Ceph RBD snapshots are a little more transparent about this, and I know that Kasten K10 definitely deletes those snapshots (although I've mentioned numerous times that orphaned images etc. don't get deleted and hang around).
Hi all,
Just to keep you all informed: the much-awaited support for shallow read-only volume snapshots for CephFS during export operations is available in K10 from version 6.5.2.
Can someone explain whether I need to edit the StorageClass/VolumeSnapshotClass with `backingSnapshot: "true"`, and what I'm supposed to do with exporterStorageClassName?
----- EDIT:
After a further quick look, it seems I need to edit the backup policy to include these overrides:
Clone my CephFS StorageClass and add a parameter with backingSnapshot: "true"
….
On the right track?
With the cloned StorageClass (extra parameter backingSnapshot: "true") and the policy edit:
The PVC is never cloned/provisioned…. :/
@voarsh Thanks for your comment.
Using this feature requires a special StorageClass, which is usually a copy of the regular StorageClass of the CephFS CSI driver, but with the backingSnapshot: "true" option in the parameters section.
You will need to create a new StorageClass with the parameter backingSnapshot set to true. There is an example in the ceph-csi GitHub repository that shows how to add the `backingSnapshot` parameter.
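As a rough sketch (the cluster ID, filesystem name, provisioner, and secret names below are Rook-style placeholders; also note, as it turns out later in this thread, that the pool parameter must be left out for snapshot-backed volumes):

```yaml
# Hypothetical clone of the regular CephFS StorageClass with backingSnapshot enabled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shallow-cephfs-csi-storage-class
provisioner: rook-ceph.cephfs.csi.ceph.com    # placeholder; use your CSI driver name
parameters:
  clusterID: rook-ceph                        # placeholder
  fsName: myfs                                # placeholder
  backingSnapshot: "true"                     # enables shallow, snapshot-backed volumes
  # NOTE: deliberately no "pool" parameter -- it cannot be set for
  # snapshot-backed volumes (see the provisioning error further down)
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner   # placeholder
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node           # placeholder
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
```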
In order to use the shallow copy, the PVCs that you create with a VolumeSnapshot as dataSource need to use this StorageClass.
In the case of K10 exports, there is a way to override which StorageClass is used when doing the clone; that's where exporterStorageClassName comes into the picture. This override resides in the policy CR of K10.
You will have to specify the name of the StorageClass that has backingSnapshot set to true as the exporterStorageClassName.
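As a sketch, this is roughly where the override would go in a K10 Policy CR; the exact nesting of exporterStorageClassName is an assumption here, so verify it against the policy documentation for your K10 version:

```yaml
# Hypothetical, partial K10 Policy -- only the export action is shown.
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: cephfs-backup-policy          # placeholder
  namespace: kasten-io
spec:
  actions:
    - action: export
      exportParameters:
        profile:
          name: my-s3-profile         # placeholder export location profile
          namespace: kasten-io
        exportData:
          enabled: true
          # assumed field placement: points exports at the shallow StorageClass
          exporterStorageClassName: shallow-cephfs-csi-storage-class
```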
Let me know if it makes sense.
Basically, you don't change the existing StorageClass that you use (restores from a local VolumeSnapshot will fail if you do that).
Instead, create a new StorageClass with the backingSnapshot parameter set to true and use the override in the policy.
The note below is important as well, as you will need to preserve SELinux options while using the CephFS shallow volume copy for export. The corresponding annotation should be added to the original StorageClass.
Additionally, in the case of SELinux usage, it may be necessary to preserve SELinuxOptions of the original Pod into the Kanister Pod during the Export phase.
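For reference, a sketch of that annotation on the original StorageClass (the annotation key is the one quoted later in this thread; the StorageClass name is a placeholder):

```yaml
# Annotate the ORIGINAL (non-shallow) CephFS StorageClass, not the clone.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-filesystem               # placeholder original StorageClass
  annotations:
    k10.kasten.io/sc-preserve-selinux-options: "true"
```

Equivalently: `kubectl annotate storageclass ceph-filesystem k10.kasten.io/sc-preserve-selinux-options="true"`.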
W0207 08:53:01.656594 1 controller.go:1165] requested volume size 214748364800 is greater than the size 0 for the source snapshot snapshot-copy-24qlrtcw. Volume plugin needs to handle volume expansion.
W0207 08:53:01.656699 1 controller.go:1165] requested volume size 536870912000 is greater than the size 0 for the source snapshot snapshot-copy-pdz9wv9p. Volume plugin needs to handle volume expansion.
W0207 08:53:01.658867 1 controller.go:1165] requested volume size 107374182400 is greater than the size 0 for the source snapshot snapshot-copy-4mdkmbsv. Volume plugin needs to handle volume expansion.
E0207 08:53:31.996223 1 controller.go:957] error syncing claim "fa273554-fdc9-4cc8-9aaf-2f8ab960cf64": failed to provision volume with StorageClass "ceph-filesystem": error getting handle for DataSource Type VolumeSnapshot by Name snapshot-copy-4hznvvz6: error getting snapshot snapshot-copy-4hznvvz6 from api server: volumesnapshots.snapshot.storage.k8s.io "snapshot-copy-4hznvvz6" not found
I0207 08:53:31.996265 1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kasten-io", Name:"kanister-pvc-tsxbr", UID:"fa273554-fdc9-4cc8-9aaf-2f8ab960cf64", APIVersion:"v1", ResourceVersion:"427807277", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "ceph-filesystem": error getting handle for DataSource Type VolumeSnapshot by Name snapshot-copy-4hznvvz6: error getting snapshot snapshot-copy-4hznvvz6 from api server: volumesnapshots.snapshot.storage.k8s.io "snapshot-copy-4hznvvz6" not found
I0207 08:53:52.691074 1 controller.go:1359] provision "kasten-io/kanister-pvc-jr9vs" class "ceph-filesystem": started
When the export backup finally appeared to be using the shallow clone StorageClass, I got:
failed to provision volume with StorageClass "shallow-cephfs-csi-storage-class": rpc error: code = InvalidArgument desc = cannot set pool for snapshot-backed volume
So I'm supposed to clone the StorageClass without a pool? O.o
Will try to recreate it without specifying a pool…
--- Edit:
After cloning the StorageClass without a pool as per https://github.com/ceph/ceph-csi/issues/3820, annotating the original StorageClass with `kubectl annotate storageclass ceph-filesystem k10.kasten.io/sc-preserve-selinux-options="true"`, and creating the new cloned StorageClass with the parameter backingSnapshot: "true"...
The export of the snapshot now appears to have the PVC in the kasten-io namespace. Will update when/if it copies the data to the external storage location.
This seems to have been fixed in version 3.10 of ceph-csi.
Hello, I am able to take a backup of a CephFS PVC using shallow read-only volumes. However, the restore of the PVC is failing with the below error in the events:
Generated from Kanister Controller Failed to execute phase: v1alpha1.Phase{Name:"restoreFromServer", State:"pending", Output:map[string]interface {}(nil), Progress:v1alpha1.PhaseProgress{ProgressPercent:"", SizeUploadedB:0, EstimatedUploadSizeB:0, EstimatedTimeSeconds:0, LastTransitionTime:<nil>}}: {"message":"Failed to restore backup from Kopia API server","function":"kasten.io/k10/kio/kanister/function.restoreDataFromServer.restoreDataFromServerPodFunc.func3","linenumber":367,"file":"kasten.io/k10/kio/kanister/function/restore_data_from_server.go:367","cause":{"message":"context deadline exceeded"}}
Error in the Kasten dashboard:
The dashboard error is a deeply nested ActionSet failure (the ActionSet k10-restorefromserver-k10-deployment-generic-volume-2.0.43gcvmh was created at 14:16:00 and marked failed by 14:26:04; the snapshot artifact reports size 15 GB, phySize 7.3 GB). Condensed, the cause chain from outermost to innermost is:
- Failure in planned phase (phase_runner.go:144)
- Failed to create PVCs from PVC specs (restore_app.go:363)
- Failed to perform Generic Volume Snapshot Restore (restore_app.go:567)
- Failed to restore some of the generic volume snapshots (restore_app.go:1816)
- Failed to restore PVC "absence-fs-helm-absence-pvc" (restore_app.go:2091)
- Failed to execute action set (restore_app.go:2348)
- ActionSet Failed (kanister/operation.go:167)
- Failed to restore backup from Kopia API server (restore_data_from_server.go:367)
- context deadline exceeded
Hello,
After the Kasten upgrade from 7.0.6 to 7.0.8, the backup policy for CephFS PVCs is failing during the export phase with an error.