Question

"Failed to find included PVCs"

  • 18 September 2022
  • 1 comment
  • 53 views


After my previous post, I deleted all of my snapshots (VolumeSnapshots and VolumeSnapshotContents) and reinstalled Kasten. After running k10_primer, I find that my basic PVC backups aren't working, with the following error:

“Failed to find included PVCs”

K10 version: 5.0.8

Status:

[
  {"name": "admin", "description": "Admin Service", "passed": true},
  {"name": "auth", "description": "Auth Service", "passed": true},
  {"name": "bloblifecyclemanager", "description": "Bloblifecyclemanager Service", "passed": true},
  {"name": "catalog", "description": "Catalog Service", "passed": true},
  {"name": "controllermanager", "description": "Controllermanager Service", "passed": true},
  {"name": "crypto", "description": "Crypto Service", "passed": true},
  {"name": "dashboardbff", "description": "Dashboardbff Service", "passed": true},
  {"name": "events", "description": "Events Service", "passed": true},
  {"name": "executor", "description": "Executor Service", "passed": true},
  {"name": "jobs", "description": "Jobs Service", "passed": true},
  {"name": "logging", "description": "Logging Service", "passed": true},
  {"name": "metering", "description": "Metering Service", "passed": true},
  {"name": "state", "description": "State Service", "passed": true},
  {"name": "vbrintegrationapi", "description": "Vbrintegrationapi Service", "passed": true}
]

The volume exists and k10_primer looks fine, so I have no clue why it isn't working.

Primer result:

Kubernetes Version Check:
  Valid kubernetes version (v1.21.12)  -  OK

RBAC Check:
  Kubernetes RBAC is enabled  -  OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled  -  OK

         Found multiple snapshot API group versions, using preferred.
CSI Capabilities Check:
  Using CSI GroupVersion snapshot.storage.k8s.io/v1  -  OK

         Found multiple snapshot API group versions, using preferred.
         Found multiple snapshot API group versions, using preferred.
I0917 23:38:58.332850       7 request.go:601] Waited for 1.046419927s due to client-side throttling, not priority and fairness, request: GET:https://10.43.0.1:443/apis/ceph.rook.io/v1
         Found multiple snapshot API group versions, using preferred.
Validating Provisioners: 
rook-ceph.cephfs.csi.ceph.com:
  Is a CSI Provisioner  -  OK
  CSI Provisioner doesn't have VolumeSnapshotClass  -  Error
  Storage Classes:
    rook-cephfs
      Valid Storage Class  -  OK
    rook-cephfs-ec
      Valid Storage Class  -  OK

driver.longhorn.io:
  Is a CSI Provisioner  -  OK
  CSI Provisioner doesn't have VolumeSnapshotClass  -  Error
  Storage Classes:
    longhorn
      Valid Storage Class  -  OK
    longhorn-2
      Valid Storage Class  -  OK
    longhorn-db
      Valid Storage Class  -  OK
    longhorn-static
      Valid Storage Class  -  OK

rook-ceph.rbd.csi.ceph.com:
  Is a CSI Provisioner  -  OK
  Storage Classes:
    rook-ceph-block
      Valid Storage Class  -  OK
  Volume Snapshot Classes:
    csi-rbdplugin-snapclass
      Has k10.kasten.io/is-snapshot-class annotation set to true  -  OK
      Has deletionPolicy 'Delete'  -  OK
    k10-clone-csi-rbdplugin-snapclass

Validate Generic Volume Snapshot:
  Pod created successfully  -  OK
  GVS Backup command executed successfully  -  OK
  Pod deleted successfully  -  OK

I am using Ceph RBD, so the errors for the other CSI provisioners are expected; it has always been like this.
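For anyone hitting the same primer errors: K10 selects a VolumeSnapshotClass by looking for the `k10.kasten.io/is-snapshot-class` annotation, which is what the primer is checking per provisioner. A quick way to verify this (a sketch; the class name below is the one from my primer output, adjust for your cluster) is:

```shell
# List all VolumeSnapshotClasses in the cluster
kubectl get volumesnapshotclass

# Inspect the annotation K10 looks for on the RBD snapshot class
kubectl get volumesnapshotclass csi-rbdplugin-snapclass \
  -o jsonpath='{.metadata.annotations.k10\.kasten\.io/is-snapshot-class}{"\n"}'

# If a provisioner's snapshot class is missing the annotation, it can be added:
# kubectl annotate volumesnapshotclass <class-name> k10.kasten.io/is-snapshot-class=true
```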

- cause:
    cause:
      fields:
        - name: objectName
          value: ubuntu
      file: kasten.io/k10/kio/exec/phases/backup/snapshot_data_phase.go:674
      function: kasten.io/k10/kio/exec/phases/backup.WorkloadVolumeMap
      linenumber: 674
      message: Failed to find included PVCs
    file: kasten.io/k10/kio/exec/phases/backup/snapshot_data_phase.go:346
    function: kasten.io/k10/kio/exec/phases/backup.processVolumeArtifacts
    linenumber: 346
    message: Failed to retrieve volumes for workload
  message: Job failed to be executed
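The trace shows K10 failing in WorkloadVolumeMap for a workload named "ubuntu", i.e. it could not map any PVCs to that workload. One way to sanity-check which PVCs the workload's pods actually reference (a sketch; replace the namespace placeholder with the application namespace) is:

```shell
# Confirm the PVCs in the application namespace exist and are Bound
kubectl get pvc -n <namespace>

# Print each pod followed by the PVC claim names it mounts; a pod with no
# claim names listed would be consistent with "Failed to find included PVCs"
kubectl get pods -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{" "}{end}{"\n"}{end}'
```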

1 comment


Hello @voarsh 

Please let us know if you are still facing this issue.

Regards

Fernando R.
