
Hi all,

Can anyone explain why the restore time from K10 is greater than from Kanister when restoring the same data (about 10G)?

The three blue lines in the screenshot are from setting injectKanisterSidecar.enabled=true and then using K10 to back up and restore the MySQL application with different data sizes. Each backup and restore was run three times, and the backup and restore times were recorded.

The green lines in the screenshot are from setting injectKanisterSidecar.enabled=false and then using K10 to back up and restore the MySQL application.

Comparing the two situations, the restore time for 20G of data differs by about 10x.

What causes such a big difference, and where do the differences come from?

 

 

Does k10 use kopia for backup and restore?


Hey, 

Firstly, the naming of this Helm chart value is not helpful, and I am going to feed this back to the engineering team.

injectKanisterSidecar.enabled = Enable Kanister sidecar injection for workload pods

By default, this Helm chart value is false.

What this value does is enable the Kasten K10 generic volume snapshot (GVS) capability.
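For reference, here is a minimal sketch of how that value can be toggled at install or upgrade time. The injectKanisterSidecar.enabled key is the one discussed in this thread; the release name, namespace, and chart repository below are just the common defaults from the Kasten docs, so adjust them to your environment:

    helm repo add kasten https://charts.kasten.io/
    helm upgrade --install k10 kasten/k10 --namespace kasten-io \
      --set injectKanisterSidecar.enabled=true

Note that a sidecar injected this way generally only appears in pods created after the change, so an already running MySQL deployment would need a rollout restart to pick it up.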

 

Applications are often deployed using non-shared storage (e.g., local SSDs) or on systems where K10 does not currently support the underlying storage provider. To protect data in these scenarios, K10 with Kanister gives you the ability, with extremely minor application modifications, to back up, restore, and migrate this application data in an efficient and transparent manner.

 

In short, this is not Kanister vs Kasten. Kanister is specifically used to capture application-consistent copies of your data services within your cluster; it will not, though, capture the make-up of the whole application and orchestrate that part of the backup. K10 leverages Kanister to provide that consistency as well as to provide this generic volume snapshot (GVS) functionality. GVS, however, is not something we would ever advocate out in the field; we would always suggest a CSI-backed storage layer, which lets us use more efficient ways of protecting the workload. This means your table of performance results is expected.
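As a quick sanity check on that suggestion, you can see whether the cluster already has a CSI-backed storage layer (and snapshot support) that K10 could use instead of GVS; these are plain kubectl commands and the output will of course vary per cluster:

    # list storage classes and their provisioners (CSI drivers are usually obvious from the provisioner name)
    kubectl get storageclass
    # list VolumeSnapshotClasses; this only returns results if the CSI snapshot CRDs are installed
    kubectl get volumesnapshotclass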

 

Let me know if this helps, and if you have any further questions.

 


Does k10 use kopia for backup and restore?

K10 uses Kopia as a data mover, which implicitly provides support to deduplicate, encrypt, and compress data at rest.


@michaelcade thanks for your reply. I want to understand the detailed backup and restore workings of K10; the purpose is to learn where the time is spent during backup and restore. Could you give me some guidance on this?

In addition, what are the default CPU and memory request resources for Kopia? I understand that data compression and deduplication will consume a lot of CPU and memory.
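One way to answer the resource question empirically is to inspect the MySQL pod after the sidecar has been injected. This is only a sketch: the pod name, namespace, and the sidecar container name (kanister-sidecar) are assumptions, so use whatever names kubectl actually reports in your cluster:

    # list the container names in the MySQL pod to find the injected sidecar
    kubectl get pod <mysql-pod> -n <mysql-namespace> -o jsonpath='{.spec.containers[*].name}'
    # print the resource requests/limits of the (assumed) kanister-sidecar container
    kubectl get pod <mysql-pod> -n <mysql-namespace> \
      -o jsonpath="{.spec.containers[?(@.name=='kanister-sidecar')].resources}"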

 
 

I would be more than happy to have a chat with you in this regard to understand what you actually need from this. As you can imagine, there are lots of intricacies hidden away by K10 to simplify the orchestration of your backup and recovery tasks within Kubernetes.

We don’t go into the detailed architecture specifics around Kopia. Kopia is a standalone open source project that we use within Kasten K10 to move data from A to B; our K10 system requirements are detailed here.

 

 


  @michaelcade Thanks. 

Judging from the restore times for the three different data sizes (2G, 10G, 20G), the trend of increasing time is obvious, and the gap keeps getting bigger. I want to know what marks the end of a restore: does it wait for all PVC data to be recovered, or does it just check that the MySQL application status is Running? If it only checks the status of the MySQL application, I don’t think it would take that much time.
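One rough way to see where the time goes, rather than a definitive answer, is simply to watch the application namespace while a restore runs; the namespace below is a placeholder. As far as I understand the GVS case, the data has to be copied back into the PVC before the MySQL pod can come up, so the pod reaching Running is not just a status check:

    # watch PVCs and pods in the MySQL namespace during the restore
    kubectl get pvc,pods -n <mysql-namespace> -w
    # events show when volumes are provisioned and pods are scheduled/started
    kubectl get events -n <mysql-namespace> --sort-by=.lastTimestamp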

 

 

