
Hello,

I am trying to migrate a Cassandra statefulset from one cluster to another. I am running into issues that I believe are related more to the statefulset application than to Kasten, but I could be wrong.

One key aspect is that both the namespace and the K8s cluster domain change on the destination side. I think this is where having different namespaces and domains between clusters becomes problematic. Since the application uses DNS to find pods and services, the migration breaks name resolution: when a restored pod starts up, it looks for the old DNS names of the pods/services. And it seems the only real way to modify a statefulset is to perform an update (rolling, canary, etc.) or to delete, modify, and redeploy it.
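
To make the DNS issue concrete, here is a minimal sketch (in Python, with hypothetical service/namespace names) of how a StatefulSet pod's FQDN is composed behind a headless service:

    # Kubernetes names a StatefulSet pod behind a headless service as:
    # <pod>.<service>.<namespace>.svc.<cluster-domain>
    def pod_fqdn(pod, service, namespace, cluster_domain="cluster.local"):
        return f"{pod}.{service}.{namespace}.svc.{cluster_domain}"

    # On the source cluster (namespace "db-old" is hypothetical):
    print(pod_fqdn("cassandra-0", "cassandra", "db-old"))
    # -> cassandra-0.cassandra.db-old.svc.cluster.local

    # On the destination, both namespace and cluster domain differ, so the
    # pod answers to a new name and anything pinned to the old FQDN breaks:
    print(pod_fqdn("cassandra-0", "cassandra", "db-new", "cluster2.local"))
    # -> cassandra-0.cassandra.db-new.svc.cluster2.local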

 

So I tried (on the destination cluster) to restore only the PVs, modify the statefulset [change the namespace, the PVC template, and the value for the seeds DNS], and then deploy the app. Here is what happened (see the sketch after the list):

  • Pods attached to the existing PV
    • But then Cassandra could not find any seed nodes and decided to reinitialize the DB (as brand new)
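
My read on why it reinitialized, as a rough diagnostic sketch (seed names are hypothetical): the restored seed list still pointed at the source cluster's FQDNs, which no longer resolve on the destination, so Cassandra saw no live seeds and bootstrapped fresh. Something like this, run from inside a pod, shows it:

    import socket

    # Seed entries carried over from the source cluster (names hypothetical)
    old_seeds = [
        "cassandra-0.cassandra.db-old.svc.cluster.local",
        "cassandra-1.cassandra.db-old.svc.cluster.local",
    ]

    for seed in old_seeds:
        try:
            socket.gethostbyname(seed)
            print(f"{seed}: resolves")
        except socket.gaierror:
            # On the destination these names do not exist, so Cassandra
            # sees no seeds and initializes a brand-new database.
            print(f"{seed}: does NOT resolve")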

So I am trying to find out if anyone has done this in the past, or if there is a way to do it via Kasten that I am missing.

Application:

  • statefulset
    • Cassandra
      • Image: 3.11.13
      • nodes: 5
      • storage: RBD on Rook-Ceph

Environment:

  • K8s on-prem
    • source: v1.21.1
    • destination: v1.24.2
    • K10 v5.0.4 (both)
    • Shared S3 bucket between both K10 instances, just for migrations

So after digging deep into the weeds on this, there does not seem to be a way to migrate this Cassandra cluster as-is within the confines of K8s.

You can migrate via the traditional Cassandra method, the same as bare-metal/VM migrations.
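
For anyone landing here later, a minimal sketch of one traditional route, assuming the snapshot + sstableloader variant (the keyspace, table, and paths below are hypothetical, and this runs per node/keyspace):

    import subprocess

    # 1. Snapshot the keyspace on each source node (keyspace name is
    #    hypothetical).
    subprocess.run(["nodetool", "snapshot", "-t", "migrate", "my_keyspace"],
                   check=True)

    # 2. Copy the snapshot directory somewhere that can reach the new
    #    cluster, then stream the SSTables in. The path must end in
    #    <keyspace>/<table>; -d takes a destination contact point.
    subprocess.run([
        "sstableloader",
        "-d", "cassandra-0.cassandra.new-ns.svc.cluster.local",
        "/restore/my_keyspace/my_table",
    ], check=True)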

If the Cassandra cluster had been built with the multi-cluster headless service (KEP-1645) and StatefulSet slices (KEP-3335), your chances of handling this particular situation would be a lot better.



Check these out; they might be very helpful.

http://migrate.yongkang.cloud 

http://eksdr.yongkang.cloud 

