Background:
We have a multi-cluster setup with OpenShift installed on-prem and on AWS and Azure. The on-prem cluster is designated as the primary, and the AWS and Azure clusters are discovered as secondaries in the Kasten K10 Multi-Cluster UI. An S3 bucket is used for the global location profile. We have created global snapshot and restore policies, and we have annotated the VolumeSnapshotClass of our Kubernetes-supported CSI driver for Kasten volume snapshots.
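For context, this is roughly how we marked the class for Kasten (a minimal sketch using the official Kubernetes Python client; the class name csi-snapclass is a placeholder for our actual VolumeSnapshotClass, and the annotation key is the one documented by Kasten):

```python
# Minimal sketch: annotating the CSI driver's VolumeSnapshotClass so Kasten K10 uses it.
# Assumes the official `kubernetes` Python client; "csi-snapclass" is a placeholder name.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
api = client.CustomObjectsApi()

# VolumeSnapshotClass is a cluster-scoped custom resource (snapshot.storage.k8s.io/v1).
api.patch_cluster_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    plural="volumesnapshotclasses",
    name="csi-snapclass",  # placeholder: our actual VolumeSnapshotClass name
    body={"metadata": {"annotations": {"k10.kasten.io/is-snapshot-class": "true"}}},
)
```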
We can take a snapshot of an application on the on-prem cluster and restore it to the AWS cluster using the same CSI driver (the persistent volume is created on the same storage), so our goal is achieved.
Concern:
We see from our storage console that the Kasten restore operation created a clone volume. In an earlier version of Kasten (v5.5), this clone volume was mounted as the persistent volume for the restored application. Now, in v6.0.3, the clone volume is still created but not mounted; instead, a new volume is created and mounted as the persistent volume for the restored application. However, all the data from the primary cluster is available on the new volume (in the restored application).
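To check where the restored volume came from, we inspected the restored PVC and its bound PV (a minimal sketch with the Kubernetes Python client; the namespace my-app and PVC name restored-data are placeholders for our actual objects):

```python
# Minimal sketch: inspect the restored PVC and its bound PV to see whether the claim
# references a snapshot/clone data source or was provisioned as a fresh volume.
# Namespace "my-app" and PVC name "restored-data" are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = core.read_namespaced_persistent_volume_claim(name="restored-data", namespace="my-app")
print("dataSource:", pvc.spec.data_source)   # set if the PVC was created from a VolumeSnapshot/PVC
print("bound volume:", pvc.spec.volume_name)

pv = core.read_persistent_volume(name=pvc.spec.volume_name)
# The CSI volumeHandle can be compared with the clone volume ID shown in the storage console.
print("CSI volumeHandle:", pv.spec.csi.volume_handle if pv.spec.csi else None)
```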
We would like to know:
- Why does Kasten create a new volume instead of using (mounting) the clone volume for the restored application, as it did in the earlier version (v5.5)?
- Was the application data stored in the S3 bucket and copied from the bucket to the new volume during the restore?
Thanks in advance.