
I’m trying to back up an OpenShift cluster through Kasten, but the backup doesn’t happen and I’m not sure what to do next.

 

[&DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-07-03 08:28:50 +0000 UTC,LastTransitionTime:2022-07-03 08:28:50 +0000 UTC,} &DeploymentCondition{Type:Progressing,Status:False,Reason:ProgressDeadlineExceeded,Message:ReplicaSet "catalog-svc-5f994799b9" has timed out progressing.,LastUpdateTime:2022-07-03 21:05:58 +0000 UTC,LastTransitionTime:2022-07-03 21:05:58 +0000 UTC,}]

 

Hello, I’m sure you will get more answers about your topic in the Veeam User Group Kasten K10 Support.

How did you deploy Kasten? With admin rights? Do you have the right version of Kasten for your OpenShift version?

Do you have more logs about your backup, or the pod logs (oc logs)?
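As a rough sketch, assuming K10 was installed in the default kasten-io namespace, something like this would show the pod state, the scheduling events, and the logs of a given service:

oc get pods -n kasten-io                         # list the K10 pods and their state
oc describe pod <pod-name> -n kasten-io          # the Events section explains why a pod is Pending
oc logs deployment/catalog-svc -n kasten-io      # logs of a specific K10 service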


I deployed it through a Helm command. The Kasten version is 5.0.2 and the OpenShift version is 4.4.6.

Yes, I used an admin ID for the installation.

Yes, I have the logs.


Have you checked this article about debugging?

https://docs.kasten.io/latest/operating/support.html#gathering-debugging-information

To me, your errors look like a problem with Kasten itself; I would try a redeployment.

Do you have write permissions on the NFS share? Are the required ports open? What kind of application are you trying to back up? Which storage class?
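For the storage questions, a quick check could look like this (namespace assumed to be kasten-io; adjust if you installed K10 elsewhere):

oc get storageclass          # which provisioners exist, and whether one is marked as default
oc get pvc -n kasten-io      # which claims are Bound and which are still Pending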


Yes, I followed the article above and generated the logs. Yes, the ports are open. It’s a custom application built on OpenShift. The storage class is nfs.


[root@bastion ~]# oc get pods
NAME                                     READY   STATUS    RESTARTS   AGE
aggregatedapis-svc-586578bb5c-rvb65      1/1     Running   0          2d
auth-svc-5588654786-9fwq4                1/1     Running   0          2d
catalog-svc-5f994799b9-x2gsd             0/2     Pending   0          37h
controllermanager-svc-684855bc88-fk5dk   1/1     Running   1          2d
crypto-svc-54bc67455f-6wzk8              3/3     Running   1          2d
dashboardbff-svc-5b8c6c765d-kjwmt        1/1     Running   1          2d2h
executor-svc-6bff56b888-5hmjz            2/2     Running   1          2d2h
executor-svc-6bff56b888-lx87h            2/2     Running   0          2d2h
executor-svc-6bff56b888-z69sx            2/2     Running   1          2d
frontend-svc-55677dc74d-2z7j6            1/1     Running   0          2d
gateway-7899889467-7xptt                 1/1     Running   1          2d
jobs-svc-78df978655-tvr7s                0/1     Pending   0          2d2h
k10-grafana-856c5b7c67-2w845             0/1     Pending   0          2d2h
kanister-svc-58c57d6bd5-whvsm            1/1     Running   0          2d
logging-svc-548cd8d4cb-gfzj7             0/1     Pending   0          2d2h
metering-svc-6d495fc67b-29p2c            0/1     Pending   0          2d2h
prometheus-server-57f89ff746-b7nv7       0/2     Pending   0          2d2h
state-svc-5759d7b569-gss7q               2/2     Running   0          2d
[root@bastion ~]# oc get pvc
NAME                 STATUS    VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jobs-pv-claim        Pending                                                                2d2h
k10-grafana          Pending                                                                2d2h
logging-pv-claim     Pending                                                                2d2h
metering-pv-claim    Pending                                                                2d2h
nfs-pvc-kasten-uat   Bound     nfs-pv-kasten-uat   200Gi      RWX            nfs            37h
prometheus-server    Pending                                                                2d2h

 


The catalog and jobs services seem to be stuck in Pending?

catalog-svc-5f994799b9-x2gsd             0/2     Pending   0          37h

jobs-svc-78df978655-tvr7s                0/1     Pending   0          2d2h

 

Did you try the suggested action? Maybe kill the problematic pods?
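A minimal sketch of that check, using the pod name from your output and assuming the kasten-io namespace:

oc describe pod catalog-svc-5f994799b9-x2gsd -n kasten-io   # the Events section shows why it stays Pending
oc delete pod catalog-svc-5f994799b9-x2gsd -n kasten-io     # the Deployment recreates the pod automatically

If the events mention an unbound PersistentVolumeClaim, deleting the pod won’t help; the claim has to bind first.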

 

Did you open a support case directly to kasten support?


Yes, I tried to kill the pods that are in the Pending state, but it didn’t resolve the issue. I raised a ticket, but the response is very slow.


Maybe @GatienGHEZA  or @Geoff Burke could have some ideas about your problem :)


I was thinking of moving it to K10 support, but it seems I can’t.

Maybe @Debarshi_K10  can pipe up here.


Hi,

 

Looks like the pods that need a PVC can’t start here. Do you have a StorageClass with dynamic provisioning in your cluster @ArshadSk? If you use NFS without a dynamic provisioner, you’ll need to first create a PV for each of those PVCs so they can be bound.
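As a rough sketch (not your exact setup), a statically provisioned NFS PV could look like the one below. The name, server address, export path, and size are placeholders, and the storageClassName and access mode have to match whatever the pending PVC requests (or be omitted if the PVC has none):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k10-jobs-pv              # hypothetical name
spec:
  capacity:
    storage: 20Gi                # placeholder, must cover the PVC request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs          # match the PVC's storage class, or omit if the PVC has none
  nfs:
    server: 192.0.2.10           # placeholder NFS server
    path: /exports/k10/jobs      # placeholder export path

Alternatively, a dynamic NFS provisioner (for example the nfs-subdir-external-provisioner project) can create these PVs automatically for the nfs StorageClass.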

 

Keep me posted!

 

Gatien

 

