
Configure NFS as Kubernetes Persistent Volume Storage

The Network File System (NFS) is a protocol that enables computers to access and share files over a network. It is an easy way to provide dedicated file storage from which multiple users and heterogeneous client devices can retrieve data on centralized disk capacity. NFS has been in active development since Sun Microsystems first released it in 1984, and it offers the following features:

  • NFS Over TCP/UDP support
  • Support for Kerberos
  • Support for WebNFS
  • NFS Version 2, NFS Version 3, and NFS Version 4 Protocols
  • Large file support, with the ability to manipulate files larger than 2 GB
  • NFS server logging – a record of file operations performed on its file systems is kept
  • Several extensions for NFS mounting via the automountd command
  • Security negotiation for the WebNFS service – the client is able to negotiate a security mechanism with an NFS server
  • Network Lock Manager and NFS

It is worth mentioning that NFS works in a client/server model: the server stores the shared data and manages client access authorization. Once authenticated, client systems can access the files just as if they existed on the local system.

The Need for Persistent Storage in Kubernetes

Within the Kubernetes ecosystem, a PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. A PersistentVolumeClaim (PVC) is a request for storage by a user in the cluster. PVCs are requests for PV resources and also act as claim checks to the resource. Persistent Volumes may be provisioned either statically or dynamically (using a StorageClass).
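
For illustration, here is a minimal sketch of a statically provisioned NFS PersistentVolume; the name, server address, and export path below are placeholders for your own environment:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-static-pv        # placeholder name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10     # placeholder: your NFS server address
    path: /data/k8s          # the directory exported by the NFS server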

Containers were designed to be stateless and ephemeral to speed up the application launch process. Whenever data needs to persist after a container dies, this stateless design becomes a problem. To ensure that data persists beyond the container’s lifecycle, the best practice is to separate data management from containers using Kubernetes storage plugins. Storage plugins enable you to create, resize, delete, and mount persistent volumes in your containers.

Configure NFS as Kubernetes Persistent Volume Storage

We’ll begin with the configuration of the NFS server, then integrate it into the Kubernetes environment.

For a production-grade persistent storage solution we recommend Rook: How To Deploy Rook Ceph Storage on Kubernetes Cluster

1. Create Data directory for k8s on NFS Server

On my NFS server I have an LVM volume group with available storage space.

[root@kvm02 ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   1   2   0 wz--n- 931.51g 151.51g
  rl     1   3   0 wz--n- 475.35g      0

I’ll create a logical volume just for Kubernetes containers data.

$ sudo lvcreate -n k8s_data -L 30G data
  Logical volume "k8s_data" created.

Next we create a filesystem on the logical volume.

$ sudo mkfs.xfs /dev/data/k8s_data
meta-data=/dev/data/k8s_data     isize=512    agcount=4, agsize=1966080 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=7864320, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=3840, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

Create a mount directory for the logical volume.

sudo mkdir -p /data/k8s

Edit the /etc/fstab file to configure persistent mounting.

$ sudo vim /etc/fstab
#K8s data mount point
/dev/data/k8s_data   /data/k8s xfs defaults 0 0

Save the file and confirm it works.

sudo mount /data/k8s/

Check using df command:

$  df -hT /data/k8s/
Filesystem                Type  Size  Used Avail Use% Mounted on
/dev/mapper/data-k8s_data xfs    30G  247M   30G   1% /data/k8s

2. Install and Configure NFS Server

Install NFS Server packages on your system.

### Debian / Ubuntu ###
sudo apt update
sudo apt install nfs-kernel-server

### RHEL based systems ###
sudo yum -y install nfs-utils

Set your domain name in the file /etc/idmapd.conf

$ sudo vim /etc/idmapd.conf
Domain = yourdomain.com

Configure NFS exports

$ sudo vim /etc/exports
 /data/k8s/ 192.168.1.0/24(rw,no_root_squash)
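
If the NFS service is already running, the new entry can be applied and inspected right away with exportfs (otherwise the restart in the next step will pick it up):

### Re-export all directories listed in /etc/exports ###
sudo exportfs -rav

### List the active exports with their options ###
sudo exportfs -v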

Restart NFS server

### Debian / Ubuntu Linux systems ###
sudo systemctl restart nfs-server

### RHEL based Linux systems ###
sudo systemctl enable --now rpcbind nfs-server
sudo firewall-cmd --add-service=nfs --permanent
sudo firewall-cmd --reload
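
From one of the Kubernetes nodes (or any host in the allowed client network), you can confirm the share is visible and mountable before moving on. This assumes the NFS client utilities are installed (nfs-common on Debian/Ubuntu, nfs-utils on RHEL); replace 172.20.30.7 with your NFS server’s address:

### List exports published by the server ###
showmount -e 172.20.30.7

### Test mount and unmount the share ###
sudo mount -t nfs 172.20.30.7:/data/k8s /mnt
df -hT /mnt
sudo umount /mnt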

3. Setup Kubernetes NFS Subdir External Provisioner

With the NFS server configured, we’ll deploy the NFS subdir external provisioner, an automatic provisioner that uses an existing, already configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.

Deploy NFS Subdir External Provisioner in Kubernetes cluster

Make sure your NFS server is configured and accessible from your Kubernetes cluster. The minimum information required to connect to the NFS server is the hostname/IP address and the exported share path.

Install Helm on your system.

curl -L https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

Confirm helm is installed and working.

$ helm version
version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.18.8"}

Also check that your kubectl command can talk to the Kubernetes cluster.

$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master01   Ready    control-plane   16d   v1.24.6
node01     Ready    <none>          16d   v1.24.6
node02     Ready    <none>          15d   v1.24.6
node03     Ready    <none>          15d   v1.24.6

The nfs-subdir-external-provisioner chart installs a custom storage class into a Kubernetes cluster using the Helm package manager. It also installs an NFS client provisioner into the cluster, which dynamically creates persistent volumes from a single NFS share.

Let’s add the helm chart repository.

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
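
Then refresh your local chart index and confirm the chart is available:

helm repo update
helm search repo nfs-subdir-external-provisioner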

Create a namespace called nfs-provisioner:

$ kubectl create ns nfs-provisioner
namespace/nfs-provisioner created

Set variables for the NFS server address and export path (adjust these to your environment; the export must allow access from your Kubernetes nodes):

NFS_SERVER=172.20.30.7
NFS_EXPORT_PATH=/data/k8s

Deploy NFS provisioner resources in your cluster using helm

helm -n nfs-provisioner install nfs-provisioner-01 nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=$NFS_SERVER \
    --set nfs.path=$NFS_EXPORT_PATH \
    --set storageClass.defaultClass=true \
    --set replicaCount=1 \
    --set storageClass.name=nfs-01 \
    --set storageClass.provisionerName=nfs-provisioner-01

Check the chart’s configuration parameters to see which values you can set while installing.
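
All supported values can be listed straight from the chart, for example:

helm show values nfs-subdir-external-provisioner/nfs-subdir-external-provisioner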

Output from the helm install command:

NAME: nfs-provisioner-01
LAST DEPLOYED: Thu Dec  1 00:17:47 2022
NAMESPACE: nfs-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None

The NFS client provisioner is deployed in the nfs-provisioner namespace.

$ kubectl get pods -n nfs-provisioner
NAME                                                                 READY   STATUS    RESTARTS   AGE
nfs-provisioner-01-nfs-subdir-external-provisioner-58bcd67f5bx9mvr   1/1     Running   0          3m34s

The name of the storageClass created is nfs-01, as set with storageClass.name during installation. Storage classes are cluster-scoped, so no namespace flag is needed:

$ kubectl get sc
NAME               PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-01 (default)   nfs-provisioner-01   Delete          Immediate           true                   9m26s

Installing Multiple Provisioners

It is possible to install multiple NFS provisioners in your cluster to access multiple NFS servers and/or multiple exports from a single NFS server.

Each provisioner must have a different storageClass.provisionerName and a different storageClass.name. For example:

helm install  -n nfs-provisioner second-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/second/exported/path \
    --set storageClass.name=second-nfs-client \
    --set storageClass.provisionerName=k8s-sigs.io/second-nfs-subdir-external-provisioner

4. Test the setup

Now we’ll test the NFS subdir external provisioner by creating a persistent volume claim and a pod that writes a test file to the volume. This confirms that the provisioner is working and that the NFS server is reachable and writable.

Create a PVC manifest file.

$ vim nfs-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-test-claim
spec:
  storageClassName: nfs-01
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi

Create a test Pod manifest.

$ vim nfs-claim-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-test-pod
spec:
  containers:
  - name: nfs-test
    image: busybox:stable
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs-test-claim

Create PVC resource.

$ kubectl apply -f nfs-claim.yaml
persistentvolumeclaim/nfs-test-claim created

Create test Pod that claims storage.

$ kubectl apply -f nfs-claim-pod.yaml
pod/nfs-test-pod created

Check pod status.

$ kubectl get pod nfs-test-pod
NAME           READY   STATUS      RESTARTS   AGE
nfs-test-pod   0/1     Completed   0          61s
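
You can also confirm that the claim was bound to a dynamically provisioned volume. The output should look similar to this (the volume name matches the directory created on the NFS server below):

$ kubectl get pvc nfs-test-claim
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-test-claim   Bound    pvc-831ea7a2-1ec5-4ed4-a841-bab7f7bd1a87   10Mi       RWX            nfs-01         61s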

On your NFS server, check for the SUCCESS file inside the PVC’s directory.

$ ls /data/k8s/
default-nfs-test-claim-pvc-831ea7a2-1ec5-4ed4-a841-bab7f7bd1a87

$  ls  /data/k8s/default-nfs-test-claim-pvc-831ea7a2-1ec5-4ed4-a841-bab7f7bd1a87/
SUCCESS

You can delete the test resources and confirm that the PVC’s directory is removed as well.

$ kubectl delete -f nfs-claim-pod.yaml
pod "nfs-test-pod" deleted

$ kubectl delete -f nfs-claim.yaml
persistentvolumeclaim "nfs-test-claim" deleted
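
Back on the NFS server, list the export directory again to confirm the claim’s directory is gone. Depending on the chart’s archiveOnDelete setting, the data may instead be kept under a directory with an archived- prefix:

ls /data/k8s/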

5. Uninstalling the chart (optional)

To uninstall the NFS provisioner deployment, run:

$ helm delete <release-name> -n nfs-provisioner

Example:
$ helm list -n nfs-provisioner
NAME              	NAMESPACE      	REVISION	UPDATED                            	STATUS  	CHART                                 	APP VERSION
nfs-provisioner-01	nfs-provisioner	1       	2022-12-17 23:13:21.08543 +0300 EAT	deployed	nfs-subdir-external-provisioner-4.0.17	4.0.2

$ helm delete nfs-provisioner-01 -n nfs-provisioner
release "nfs-provisioner-01" uninstalled

Wrapping up

In this blog post we configured an NFS server and deployed an automatic provisioner for it on Kubernetes. We then tested creating PVCs with the storage class, and this was successful. In future articles we’ll discuss other storage solutions that can be used on Kubernetes.
