Solved

Using NFS Share on NAS as a Backup Location: Where to Enter Credentials?


Hi,

I’ve been playing around with Kubernetes and have built a small cluster. 
I've installed Kasten K10 and it is backing up/snapshotting, but that goes to storage within the pod it's installed in.

I have created an NFS share on my NAS, as well as creating a PV and PVC for the share:

nano nfs-share-pv.yaml

# PersistentVolumes are cluster-scoped, so no namespace is needed in the metadata
apiVersion: v1
kind: PersistentVolume
metadata:
   name: nas01-kasten-backups
spec:
   capacity:
      storage: 500Gi
   volumeMode: Filesystem
   accessModes:
      - ReadWriteMany
   # Retain keeps the volume and its data when the claim is deleted
   persistentVolumeReclaimPolicy: Retain
   storageClassName: nfs
   mountOptions:
      - hard
      - nfsvers=4.1
   nfs:
      # export path and hostname of the NAS
      path: /volume2/kasten-backups
      server: nas01.my-domain.com

kubectl create -f nfs-share-pv.yaml
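A quick check that the PV registered:

kubectl get pv nas01-kasten-backups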


nano nfs-share-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: nas01-kasten-backups-pvc
   namespace: kasten-io
spec:
   # must match the storageClassName on the PV above so the claim binds to it
   storageClassName: nfs
   accessModes:
      - ReadWriteMany
   resources:
      requests:
         storage: 500Gi
   # optionally pin the claim to the exact PV:
   # volumeName: nas01-kasten-backups

kubectl create -f nfs-share-pvc.yaml
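And to confirm the claim bound to the PV (STATUS should show Bound):

kubectl get pvc -n kasten-io nas01-kasten-backups-pvc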


This shows up in my Kubernetes dashboard and I’ve also configured it in Location Profiles in Kasten.

However, there seems to be no way to provide authentication.

Assuming the NAS user kastenbackup has the password “mySecurePassword”, where do I put this in the YAML to enable access to the share?

I’m very new to the world of Kubernetes :(

Thanks,

Richie

Best answer by Richie Rogers

Hi,

After a lot more digging it turns out that I needed to run the following after editing the /etc/exports file:

sudo exportfs -rav

This hadn't been made clear in the docs I'd found earlier.

The NFS mount and persistent volume claim all work now.

Thanks for your help,

Richie


5 comments

Hagag
  • Experienced User
  • 154 comments
  • August 21, 2024

Hi @Richie Rogers 

you need to ensure that your NFS server (NAS) is configured to allow access from your Kubernetes nodes.

Here is an example that grants read and write access from any IP address; the options in parentheses control how clients may connect and should be reviewed to meet your needs:

/volume2/kasten-backups  *(rw,sync,no_subtree_check,no_root_squash)
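If you prefer not to export to everything, you can put your nodes' subnet in place of the * (the CIDR below is only a placeholder, substitute your own network):

/volume2/kasten-backups  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)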


BR,
Ahmed Hagag


Richie Rogers
  • Author
  • Not a newbie anymore
  • 3 comments
  • August 21, 2024

Hi,

I’ve added the Kubernetes internal CIDR to the permissions on the NAS.

I’m not sure where this goes:

/volume2/kasten-backups *(rw,sync,no_subtree_check,no_root_squash)

Is that in the nfs-share-pv.yaml or on the Kubernetes nodes?

I have seen mention of a file /etc/exports (which does not exist on my nodes).

Thanks,
Richie


Geoff Burke
  • Veeam Legend, Veeam Vanguard
  • 1313 comments
  • August 21, 2024
Richie Rogers wrote:

Hi,

I’ve added the Kubernetes internal CIDR to the permissions on the NAS.

I’m not sure where this goes:

/volume2/kasten-backups *(rw,sync,no_subtree_check,no_root_squash)

Is that in the nfs-share-pv.yaml or on the Kubernetes nodes?

I have seen mention of a file /etc/exports (which does not exist on my nodes).

Thanks,
Richie

Hi @Richie Rogers 

That would be on your NAS server, in the /etc/exports file.
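You can also check what the NAS is exporting from one of your nodes with showmount, assuming the NFS client utilities are installed there (substituting your NAS hostname):

showmount -e nas01.my-domain.com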


Richie Rogers
  • Author
  • Not a newbie anymore
  • 3 comments
  • August 21, 2024

Hi,

Ok, just looking at the /etc/exports file on the NAS (I’ve created a new NFS share, “kubernetes-filestore”), and it has:

/volume2/kubernetes-filestore   10.96.0.0/12(rw,async,no_wdelay,crossmnt,insecure,no_root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)

Also tried:

/volume2/kubernetes-filestore *(rw,sync,no_subtree_check,no_root_squash)

and I also tried adding the “insecure” option.

I’m using nfs-subdir-external-provisioner and I get this error in the pod:

MountVolume.SetUp failed for volume "nfs-subdir-external-provisioner-root" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs nas01.domain.com:/volume2/kubernetes-filestore /var/lib/kubelet/pods/e2bb098d-6838-420f-9f49-5a9462c68825/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root
Output: mount.nfs: access denied by server while mounting nas01.domain.com:/volume2/kubernetes-filestore
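For reference, this is the mount the provisioner attempts; it can also be tried by hand from a node to take Kubernetes out of the picture (/mnt/nfs-test is just a scratch directory):

sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs nas01.domain.com:/volume2/kubernetes-filestore /mnt/nfs-test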

Any ideas?

Thanks,

Richie


Richie Rogers
  • Author
  • Not a newbie anymore
  • 3 comments
  • Answer
  • August 22, 2024

Hi,

After a lot more digging it turns out that I needed to run the following after editing the /etc/exports file:

sudo exportfs -rav

This hadn't been made clear in the docs I'd found earlier.

The NFS mount and persistent volume claim all work now.
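After re-exporting, the active export list can be double-checked with:

sudo exportfs -v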

Thanks for your help,

Richie

