Configure Kubernetes persistent volumes with NFS storage for container data persistence

Intermediate · 25 min · Apr 04, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Set up NFS-backed persistent volumes in Kubernetes to provide shared, durable storage for containerized applications across multiple nodes with automatic failover capabilities.

Prerequisites

  • Running Kubernetes cluster
  • Administrative access to cluster nodes
  • Dedicated server or VM for NFS storage
  • Network connectivity between NFS server and Kubernetes nodes

What this solves

Kubernetes pods are ephemeral by design, meaning data stored inside containers is lost when pods restart or move between nodes. This tutorial shows you how to configure Network File System (NFS) as a persistent storage backend for Kubernetes, enabling containers to maintain data across restarts, scaling events, and node failures. NFS provides shared storage that multiple pods can access simultaneously, making it ideal for applications requiring shared file access or ReadWriteMany volume access modes.

Step-by-step installation

Update system packages

Start by updating your package manager on all nodes to ensure you get the latest versions of NFS components.

# Ubuntu / Debian
sudo apt update && sudo apt upgrade -y

# AlmaLinux / Rocky Linux
sudo dnf update -y

Install NFS server components

Install the NFS server package on your designated storage node. This node will serve as the central storage location for all persistent volumes.

# Ubuntu / Debian
sudo apt install -y nfs-kernel-server nfs-common

# AlmaLinux / Rocky Linux
sudo dnf install -y nfs-utils

Create NFS storage directory

Create a dedicated directory for NFS exports and set appropriate permissions. This directory will contain all persistent volume data for your Kubernetes cluster.

sudo mkdir -p /srv/nfs/k8s-storage
sudo chown nobody:nogroup /srv/nfs/k8s-storage   # on AlmaLinux/Rocky use nobody:nobody
sudo chmod 755 /srv/nfs/k8s-storage
Note: Owning the export as the unprivileged nobody account lets the NFS server map squashed client requests to a valid user. Debian-based systems use the nogroup group; RHEL-based systems use nobody:nobody.

Configure NFS exports

Define which directories to share and who may access them by adding the following line to /etc/exports on the NFS server. Replace the IP range with your actual cluster network.

/srv/nfs/k8s-storage 203.0.113.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
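If you script the server setup, the export line can be generated and appended rather than edited by hand. A minimal sketch; the export_line helper is illustrative, and the option string mirrors the example above:

```shell
#!/bin/sh
# Build an /etc/exports line for a given directory and client network.
# The option string matches the tutorial's example export.
export_line() {
  printf '%s %s(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)\n' "$1" "$2"
}

export_line /srv/nfs/k8s-storage 203.0.113.0/24
# → /srv/nfs/k8s-storage 203.0.113.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)

# To apply on the NFS server (requires root):
# export_line /srv/nfs/k8s-storage 203.0.113.0/24 | sudo tee -a /etc/exports
```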

Apply the export configuration and start the NFS service.

sudo exportfs -ra

# Ubuntu / Debian
sudo systemctl enable --now nfs-kernel-server
sudo systemctl status nfs-kernel-server

# AlmaLinux / Rocky Linux (the unit is named nfs-server)
sudo systemctl enable --now nfs-server
sudo systemctl status nfs-server

Configure firewall for NFS

Open the necessary ports for NFS communication between the server and Kubernetes nodes.

# Ubuntu / Debian (ufw); port 2049 is NFS, 111 is rpcbind
sudo ufw allow from 203.0.113.0/24 to any port 2049
sudo ufw allow from 203.0.113.0/24 to any port 111

# AlmaLinux / Rocky Linux (firewalld)
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --reload

Install NFS client on Kubernetes nodes

Install NFS client utilities on all Kubernetes worker and master nodes to enable mounting NFS shares.

# Ubuntu / Debian
sudo apt install -y nfs-common

# AlmaLinux / Rocky Linux
sudo dnf install -y nfs-utils

Test NFS connectivity

Verify that Kubernetes nodes can successfully mount the NFS share before proceeding with persistent volume configuration.

sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 203.0.113.10:/srv/nfs/k8s-storage /mnt/nfs-test
echo "NFS test successful" | sudo tee /mnt/nfs-test/test.txt
cat /mnt/nfs-test/test.txt
sudo umount /mnt/nfs-test
sudo rmdir /mnt/nfs-test

Create persistent volume

Define a Kubernetes persistent volume that uses your NFS server as the storage backend, and save the manifest below as nfs-persistent-volume.yaml. This volume can be shared across multiple pods.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-01
  labels:
    type: nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 203.0.113.10
    path: "/srv/nfs/k8s-storage"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage

Apply the persistent volume configuration to your cluster.

kubectl apply -f nfs-persistent-volume.yaml
kubectl get pv
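The get command shows a point-in-time status; if you script the rollout, you can block until the volume is registered instead. A sketch using kubectl wait (requires kubectl 1.23+ for jsonpath conditions; the PV name matches the manifest above):

```shell
# Block until the statically provisioned PV reaches the Available phase
kubectl wait pv/nfs-pv-01 --for=jsonpath='{.status.phase}'=Available --timeout=60s
```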

Create storage class

Define a storage class to group your NFS volumes under a single name and standardize storage configuration across the cluster. Save the manifest below as nfs-storage-class.yaml. Note that kubernetes.io/no-provisioner only supports statically created volumes like the one above; dynamic provisioning would require an external provisioner such as nfs-subdir-external-provisioner.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Apply the storage class configuration to your cluster.

kubectl apply -f nfs-storage-class.yaml
kubectl get storageclass

Create persistent volume claim

Create a persistent volume claim that applications can use to request storage from the NFS-backed persistent volume, and save the manifest below as nfs-persistent-volume-claim.yaml. Because the storage class uses WaitForFirstConsumer binding, the claim will show as Pending until a pod actually references it.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-01
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      type: nfs

Apply the claim to your cluster.

kubectl apply -f nfs-persistent-volume-claim.yaml
kubectl get pvc

Deploy test application

Create a test deployment that uses the NFS persistent volume to verify data persistence across pod restarts and scaling operations. Save the manifest below as nfs-test-deployment.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-test-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-test-app
  template:
    metadata:
      labels:
        app: nfs-test-app
    spec:
      containers:
      - name: test-container
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nfs-storage
          mountPath: /usr/share/nginx/html/data
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo $(date) >> /usr/share/nginx/html/data/timestamps.log; sleep 30; done & nginx -g 'daemon off;'"]
      volumes:
      - name: nfs-storage
        persistentVolumeClaim:
          claimName: nfs-pvc-01

Apply the deployment and confirm both replicas start.

kubectl apply -f nfs-test-deployment.yaml
kubectl get pods -l app=nfs-test-app
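To confirm the rollout actually completed rather than just listing pods, you can block on the deployment status; a small sketch:

```shell
# Wait for both replicas to become available before testing persistence
kubectl rollout status deployment/nfs-test-app --timeout=120s
```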

Verify your setup

Check that your NFS persistent volumes are working correctly and data persists across pod operations.

# Check persistent volume status
kubectl get pv nfs-pv-01 -o wide

# Check persistent volume claim binding
kubectl get pvc nfs-pvc-01 -o wide

# Verify pods are running
kubectl get pods -l app=nfs-test-app

# Check data persistence (Ctrl+C to stop following)
kubectl exec -it deployment/nfs-test-app -- tail -f /usr/share/nginx/html/data/timestamps.log

# Test data persistence by deleting a pod
kubectl delete pod -l app=nfs-test-app --grace-period=0 --force

# Verify data survives pod restart
kubectl wait --for=condition=ready pod -l app=nfs-test-app --timeout=60s
kubectl exec -it deployment/nfs-test-app -- cat /usr/share/nginx/html/data/timestamps.log

You can also verify the NFS server side to confirm files are being written:

sudo ls -la /srv/nfs/k8s-storage/
sudo tail -f /srv/nfs/k8s-storage/timestamps.log
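Because the volume is ReadWriteMany, you can also scale the deployment and watch every replica append to the same log file; a quick sketch:

```shell
# Scale up and confirm the new pods share the same backing file
kubectl scale deployment/nfs-test-app --replicas=4
kubectl wait --for=condition=ready pod -l app=nfs-test-app --timeout=120s
kubectl exec deployment/nfs-test-app -- wc -l /usr/share/nginx/html/data/timestamps.log
```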

Common issues

| Symptom | Cause | Fix |
|---|---|---|
| PVC stuck in Pending state | No matching PV available | Check PV labels match the PVC selector: kubectl describe pvc nfs-pvc-01 |
| Mount operation not permitted | NFS export permissions too restrictive | Add the insecure option to /etc/exports and run sudo exportfs -ra |
| Connection refused to NFS server | Firewall blocking NFS ports | Open ports 2049, 111, and related RPC ports on the NFS server |
| Permission denied when writing files | Incorrect directory ownership | Set ownership: sudo chown -R nobody:nogroup /srv/nfs/k8s-storage |
| Stale file handle errors | NFS server restarted or network interruption | Restart affected pods: kubectl delete pod -l app=your-app |
| Volume not mounting in pods | NFS client utilities not installed on nodes | Install nfs-common / nfs-utils on all Kubernetes nodes |
Important: Always use proper file permissions instead of chmod 777. NFS requires careful permission management: use chown to set correct ownership and grant only the minimal permissions each workload needs.

Production considerations

For production deployments, consider implementing these additional configurations for reliability and security.

Enable NFS server high availability

Configure NFS server clustering or use managed NFS services to eliminate single points of failure.

# Example: configure NFS with DRBD for HA (requires additional setup)
sudo apt install -y drbd-utils

# Then configure DRBD, Pacemaker, and Corosync to fail the NFS export over between nodes

Implement backup strategy

Set up automated backups of NFS storage to protect against data loss. You can integrate this with existing backup solutions as shown in our backup automation tutorial.

# Create backup script
sudo mkdir -p /opt/nfs-backup
sudo tee /opt/nfs-backup/backup-nfs.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/backup/nfs/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
rsync -av --delete /srv/nfs/k8s-storage/ "$BACKUP_DIR/"
find /backup/nfs/ -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
EOF

sudo chmod 755 /opt/nfs-backup/backup-nfs.sh
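To run the script on a schedule, a system cron drop-in is the simplest option; a sketch assuming the path above (adjust the schedule to your retention needs):

```shell
# Run the NFS backup nightly at 02:30 as root
echo '30 2 * * * root /opt/nfs-backup/backup-nfs.sh' | sudo tee /etc/cron.d/nfs-backup
```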

Monitor NFS performance

Implement monitoring to track NFS performance and identify bottlenecks. This complements broader infrastructure monitoring solutions.

# Check NFS server statistics
sudo nfsstat -s

# List established client connections on the NFS port
sudo ss -tn 'sport = :2049'

# Check export status
sudo exportfs -v
