Set up NFS-backed persistent volumes in Kubernetes to provide shared, durable storage for containerized applications across multiple nodes. Note that a single NFS server is itself a single point of failure; the production considerations section below covers high-availability options.
Prerequisites
- Running Kubernetes cluster
- Administrative access to cluster nodes
- Dedicated server or VM for NFS storage
- Network connectivity between NFS server and Kubernetes nodes
What this solves
Kubernetes pods are ephemeral by design, meaning data stored inside containers is lost when pods restart or move between nodes. This tutorial shows you how to configure Network File System (NFS) as a persistent storage backend for Kubernetes, enabling containers to maintain data across restarts, scaling events, and node failures. NFS provides shared storage that multiple pods can access simultaneously, making it ideal for applications requiring shared file access or ReadWriteMany volume access modes.
Step-by-step installation
Update system packages
Start by updating system packages on all nodes so you get current versions of the NFS components.
sudo apt update && sudo apt upgrade -y
Install NFS server components
Install the NFS server package on your designated storage node. This node will serve as the central storage location for all persistent volumes.
sudo apt install -y nfs-kernel-server nfs-common
Create NFS storage directory
Create a dedicated directory for NFS exports and set appropriate permissions. This directory will contain all persistent volume data for your Kubernetes cluster.
sudo mkdir -p /srv/nfs/k8s-storage
sudo chown nobody:nogroup /srv/nfs/k8s-storage
sudo chmod 755 /srv/nfs/k8s-storage
Configure NFS exports
Add an export entry to /etc/exports on the NFS server to define which directory to share and which clients may access it. Replace the IP range with your actual cluster network. The no_root_squash option lets root on client nodes retain root access to the share, which simplifies Kubernetes volume permissions but weakens security, so keep the export restricted to the cluster network.
/srv/nfs/k8s-storage 203.0.113.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
Apply the export configuration and start the NFS service.
sudo exportfs -ra
sudo systemctl enable --now nfs-kernel-server
sudo systemctl status nfs-kernel-server
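Before moving on, confirm the export is actually visible. The sketch below wraps the check in a small helper (check_export is our name, not a standard tool) so the string match can be exercised apart from a live server:

```shell
#!/usr/bin/env bash
# check_export EXPORTS_OUTPUT PATH -> succeeds if PATH appears as an export.
# Pure string check, so it works on live or captured output alike.
check_export() {
  printf '%s\n' "$1" | grep -q "^$2[[:space:]]"
}

# Live usage on the NFS server:
#   check_export "$(showmount -e localhost)" /srv/nfs/k8s-storage && echo "export active"
check_export "/srv/nfs/k8s-storage 203.0.113.0/24" /srv/nfs/k8s-storage && echo "export active"
```

The same helper is handy later from client nodes, fed with showmount -e pointed at the server's IP.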
Configure firewall for NFS
Open the necessary ports for NFS communication between the server and Kubernetes nodes. NFSv4 needs only port 2049; port 111 (rpcbind) is additionally required for NFSv3 and for showmount queries.
sudo ufw allow from 203.0.113.0/24 to any port 2049
sudo ufw allow from 203.0.113.0/24 to any port 111
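To confirm the rules took effect, you can match the output of ufw status against the expected ports. A minimal sketch (has_rule is our helper name):

```shell
#!/usr/bin/env bash
# has_rule STATUS_OUTPUT PORT -> succeeds when a rule for PORT is present.
# Operates on captured text, so it behaves the same on `ufw status` output.
has_rule() {
  printf '%s\n' "$1" | grep -Eq "(^|[[:space:]])$2(/tcp|/udp)?([[:space:]]|$)"
}

# Live usage on the NFS server:
#   has_rule "$(sudo ufw status)" 2049 || echo "port 2049 not allowed"
has_rule "2049    ALLOW    203.0.113.0/24" 2049 && echo "rule present"
```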
Install NFS client on Kubernetes nodes
Install NFS client utilities on all Kubernetes nodes (control plane and workers) so they can mount NFS shares.
sudo apt install -y nfs-common
Test NFS connectivity
Verify that Kubernetes nodes can successfully mount the NFS share before proceeding with persistent volume configuration.
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 203.0.113.10:/srv/nfs/k8s-storage /mnt/nfs-test
echo "NFS test successful" | sudo tee /mnt/nfs-test/test.txt
cat /mnt/nfs-test/test.txt
sudo umount /mnt/nfs-test
sudo rmdir /mnt/nfs-test
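Transient network hiccups can make a first mount attempt fail even when the export is healthy. A small retry wrapper, sketched below (retry is our helper, not a standard command), makes the connectivity test more forgiving:

```shell
#!/usr/bin/env bash
# retry TRIES DELAY CMD... -> run CMD until it succeeds, at most TRIES times,
# sleeping DELAY seconds between attempts.
retry() {
  local tries="$1" delay="$2" i
  shift 2
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}

# Live usage on a Kubernetes node (server IP and path from the steps above):
#   retry 3 5 sudo mount -t nfs 203.0.113.10:/srv/nfs/k8s-storage /mnt/nfs-test
retry 3 0 true && echo "retry works"
```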
Create persistent volume
Define a Kubernetes persistent volume that uses your NFS server as the storage backend. This volume can be shared across multiple pods.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-01
  labels:
    type: nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 203.0.113.10
    path: "/srv/nfs/k8s-storage"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage
Apply the persistent volume configuration to your cluster.
kubectl apply -f nfs-persistent-volume.yaml
kubectl get pv
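A newly created PV should report the Available phase until a claim binds it. The polling helper below is a sketch of how to wait for that (wait_for_output is our name; the kubectl jsonpath queries themselves are standard):

```shell
#!/usr/bin/env bash
# wait_for_output WANT TRIES CMD... -> poll CMD roughly once per second until
# its stdout equals WANT; fail after TRIES attempts.
wait_for_output() {
  local want="$1" tries="$2" i out
  shift 2
  for ((i = 1; i <= tries; i++)); do
    out="$("$@")" && [[ "$out" == "$want" ]] && return 0
    sleep 1
  done
  return 1
}

# Live usage:
#   wait_for_output Available 30 kubectl get pv nfs-pv-01 -o jsonpath='{.status.phase}'
#   wait_for_output Bound 30 kubectl get pvc nfs-pvc-01 -o jsonpath='{.status.phase}'
wait_for_output ok 1 echo ok && echo "helper works"
```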
Create storage class
Define a storage class to group your NFS volumes and standardize storage configuration across the cluster. Note that the no-provisioner provisioner does not create volumes dynamically; each PV must be created by hand as above. For dynamic NFS provisioning, deploy an external provisioner such as nfs-subdir-external-provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
kubectl apply -f nfs-storage-class.yaml
kubectl get storageclass
Create persistent volume claim
Create a persistent volume claim that applications can use to request storage from the NFS-backed persistent volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-01
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      type: nfs
kubectl apply -f nfs-persistent-volume-claim.yaml
kubectl get pvc
Deploy test application
Create a test deployment that uses the NFS persistent volume to verify data persistence across pod restarts and scaling operations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-test-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-test-app
  template:
    metadata:
      labels:
        app: nfs-test-app
    spec:
      containers:
        - name: test-container
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs-storage
              mountPath: /usr/share/nginx/html/data
          command: ["/bin/sh"]
          args: ["-c", "while true; do echo $(date) >> /usr/share/nginx/html/data/timestamps.log; sleep 30; done & nginx -g 'daemon off;'"]
      volumes:
        - name: nfs-storage
          persistentVolumeClaim:
            claimName: nfs-pvc-01
kubectl apply -f nfs-test-deployment.yaml
kubectl get pods -l app=nfs-test-app
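kubectl rollout status deployment/nfs-test-app is the simplest way to wait for the pods; if you prefer an explicit check, the sketch below compares ready replicas against the desired count (all_ready is our helper name):

```shell
#!/usr/bin/env bash
# all_ready "READY/DESIRED" -> succeeds when both counts are present and equal.
all_ready() {
  local ready total
  IFS=/ read -r ready total <<< "$1"
  [[ -n "$ready" && "$ready" == "$total" ]]
}

# Live usage:
#   all_ready "$(kubectl get deploy nfs-test-app -o jsonpath='{.status.readyReplicas}/{.spec.replicas}')" && echo "rollout complete"
all_ready "2/2" && echo "rollout complete"
```

Note that readyReplicas is omitted from the status while zero pods are ready, which the empty-string check above treats as not ready.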
Verify your setup
Check that your NFS persistent volumes are working correctly and data persists across pod operations.
# Check persistent volume status
kubectl get pv nfs-pv-01 -o wide
# Check persistent volume claim binding
kubectl get pvc nfs-pvc-01 -o wide
# Verify pods are running
kubectl get pods -l app=nfs-test-app
# Watch data being written (Ctrl+C to stop following)
kubectl exec -it deployment/nfs-test-app -- tail -f /usr/share/nginx/html/data/timestamps.log
# Test data persistence by deleting a pod
kubectl delete pod -l app=nfs-test-app --grace-period=0 --force
# Verify data survives the pod restart
kubectl wait --for=condition=ready pod -l app=nfs-test-app --timeout=60s
kubectl exec -it deployment/nfs-test-app -- cat /usr/share/nginx/html/data/timestamps.log
You can also verify the NFS server side to confirm files are being written:
sudo ls -la /srv/nfs/k8s-storage/
sudo tail -f /srv/nfs/k8s-storage/timestamps.log
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| PVC stuck in Pending state | No matching PV available | Check PV labels match PVC selector: kubectl describe pvc nfs-pvc-01 |
| Mount operation not permitted | NFS export permissions too restrictive | Add insecure option to /etc/exports and run sudo exportfs -ra |
| Connection refused to NFS server | Firewall blocking NFS ports | Open ports 2049, 111, and related RPC ports on NFS server |
| Permission denied when writing files | Incorrect directory ownership | Set ownership: sudo chown -R nobody:nogroup /srv/nfs/k8s-storage |
| Stale file handle errors | NFS server restarted or network interruption | Restart affected pods: kubectl delete pod -l app=your-app |
| Volume not mounting in pods | nfs-common not installed on nodes | Install NFS client on all Kubernetes nodes |
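For the connection-refused and mount failures above, a quick TCP reachability probe from a Kubernetes node narrows the cause down. The sketch below uses bash's /dev/tcp redirection and the coreutils timeout command (port_open is our helper name):

```shell
#!/usr/bin/env bash
# port_open HOST PORT -> succeeds if a TCP connection to HOST:PORT opens.
port_open() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Live usage from a Kubernetes node (server IP from the steps above):
#   port_open 203.0.113.10 2049 && echo "nfsd reachable"
#   port_open 203.0.113.10 111  && echo "rpcbind reachable"
port_open 127.0.0.1 1 || echo "port closed (expected on most hosts)"
```

If 2049 is unreachable the firewall or the NFS service is the problem; if 2049 is open but mounts still fail, look at the export options instead.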
Production considerations
For production deployments, consider implementing these additional configurations for reliability and security.
Enable NFS server high availability
Configure NFS server clustering or use managed NFS services to eliminate single points of failure.
# Example: configure NFS with DRBD for HA (requires additional setup)
sudo apt install -y drbd-utils
# Then configure DRBD, Pacemaker, and Corosync for NFS failover
Implement backup strategy
Set up automated backups of NFS storage to protect against data loss. You can integrate this with existing backup solutions as shown in our backup automation tutorial.
# Create backup script
sudo mkdir -p /opt/nfs-backup
sudo tee /opt/nfs-backup/backup-nfs.sh << 'EOF'
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/backup/nfs/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
rsync -av --delete /srv/nfs/k8s-storage/ "$BACKUP_DIR/"
# Prune only the dated top-level backup directories older than 30 days
find /backup/nfs/ -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
EOF
sudo chmod 755 /opt/nfs-backup/backup-nfs.sh
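To run the backup on a schedule, a cron entry can invoke the script nightly; the 02:00 time below is an arbitrary choice:

```shell
# Schedule the backup nightly at 02:00; /etc/cron.d entries need a user field
echo '0 2 * * * root /opt/nfs-backup/backup-nfs.sh' | sudo tee /etc/cron.d/nfs-backup
```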
Monitor NFS performance
Implement monitoring to track NFS performance and identify bottlenecks. This complements broader infrastructure monitoring solutions.
# Check NFS server statistics
sudo nfsstat -s
# Confirm the NFS server is listening on port 2049
sudo ss -tuln | grep 2049
# Check export status
sudo exportfs -v
Next steps
- Install and configure Kubernetes cluster with kubeadm and security hardening
- Implement Kubernetes monitoring with Prometheus and Helm charts
- Configure Kubernetes ingress controller with NGINX and SSL certificates
- Set up Kubernetes persistent volume snapshots and backup automation
- Configure Kubernetes network policies for enhanced cluster security
Automated install script
Run the following script to automate the entire setup. It handles both server and client roles and supports Debian- and RHEL-family distributions.
#!/usr/bin/env bash
set -euo pipefail
# NFS Kubernetes Persistent Volume Setup Script
# Production-grade installer for NFS-backed persistent storage
# Colors for output
readonly RED='\033[0;31m'
readonly GREEN='\033[0;32m'
readonly YELLOW='\033[1;33m'
readonly NC='\033[0m'
# Configuration
readonly NFS_DIR="/srv/nfs/k8s-storage"
readonly EXPORTS_FILE="/etc/exports"
readonly SCRIPT_NAME=$(basename "$0")
# Global variables
CLUSTER_NETWORK=""
NFS_SERVER_IP=""
MODE=""
PKG_MGR=""
PKG_INSTALL=""
FIREWALL_CMD=""
NFS_SERVICE=""
# Cleanup function
cleanup() {
echo -e "${RED}[ERROR] Script failed. Cleaning up...${NC}" >&2
if [[ -d "${NFS_DIR}" ]] && [[ -z "$(ls -A "${NFS_DIR}")" ]]; then
rm -rf "${NFS_DIR}"
fi
exit 1
}
trap cleanup ERR
usage() {
cat << EOF
Usage: $SCRIPT_NAME [server|client] <cluster_network> [nfs_server_ip]
Examples:
$SCRIPT_NAME server 192.168.1.0/24
$SCRIPT_NAME client 192.168.1.0/24 192.168.1.10
Arguments:
mode 'server' or 'client'
cluster_network CIDR notation (e.g., 192.168.1.0/24)
nfs_server_ip NFS server IP (required for client mode)
EOF
exit 1
}
log_info() {
echo -e "${GREEN}$1${NC}"
}
log_warn() {
echo -e "${YELLOW}$1${NC}"
}
log_error() {
echo -e "${RED}$1${NC}" >&2
}
detect_distro() {
if [[ ! -f /etc/os-release ]]; then
log_error "Cannot detect distribution. /etc/os-release not found."
exit 1
fi
. /etc/os-release
case "$ID" in
ubuntu|debian)
PKG_MGR="apt"
PKG_INSTALL="apt install -y"
FIREWALL_CMD="ufw"
NFS_SERVICE="nfs-kernel-server"
;;
almalinux|rocky|centos|rhel|ol|fedora)
PKG_MGR="dnf"
PKG_INSTALL="dnf install -y"
FIREWALL_CMD="firewalld"
NFS_SERVICE="nfs-server"
;;
amzn)
PKG_MGR="yum"
PKG_INSTALL="yum install -y"
FIREWALL_CMD="firewalld"
NFS_SERVICE="nfs-server"
;;
*)
log_error "Unsupported distribution: $ID"
exit 1
;;
esac
}
check_prerequisites() {
if [[ $EUID -ne 0 ]]; then
log_error "This script must be run as root or with sudo"
exit 1
fi
if ! command -v systemctl &> /dev/null; then
log_error "systemctl is required but not found"
exit 1
fi
}
validate_network() {
if [[ ! $1 =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/[0-9]{1,2}$ ]]; then
log_error "Invalid network format. Use CIDR notation (e.g., 192.168.1.0/24)"
exit 1
fi
}
validate_ip() {
if [[ ! $1 =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
log_error "Invalid IP address format"
exit 1
fi
}
update_system() {
echo "[1/8] Updating system packages..."
case "$PKG_MGR" in
apt)
apt update && apt upgrade -y
;;
dnf|yum)
$PKG_MGR upgrade -y
;;
esac
log_info "System packages updated successfully"
}
install_nfs_server() {
echo "[2/8] Installing NFS server components..."
case "$PKG_MGR" in
apt)
$PKG_INSTALL nfs-kernel-server nfs-common
;;
dnf|yum)
$PKG_INSTALL nfs-utils
;;
esac
log_info "NFS server components installed"
}
install_nfs_client() {
echo "[2/8] Installing NFS client components..."
case "$PKG_MGR" in
apt)
$PKG_INSTALL nfs-common
;;
dnf|yum)
$PKG_INSTALL nfs-utils
;;
esac
log_info "NFS client components installed"
}
create_nfs_directory() {
echo "[3/8] Creating NFS storage directory..."
mkdir -p "${NFS_DIR}"
# Debian-family systems use nobody:nogroup; RHEL-family systems use nobody:nobody
chown nobody:nogroup "${NFS_DIR}" 2>/dev/null || chown nobody:nobody "${NFS_DIR}"
chmod 755 "${NFS_DIR}"
log_info "NFS storage directory created: ${NFS_DIR}"
}
configure_nfs_exports() {
echo "[4/8] Configuring NFS exports..."
local export_line="${NFS_DIR} ${CLUSTER_NETWORK}(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)"
if ! grep -q "${NFS_DIR}" "${EXPORTS_FILE}" 2>/dev/null; then
echo "${export_line}" >> "${EXPORTS_FILE}"
else
log_warn "Export already exists in ${EXPORTS_FILE}"
fi
exportfs -ra
log_info "NFS exports configured"
}
start_nfs_service() {
echo "[5/8] Starting NFS service..."
systemctl enable --now "${NFS_SERVICE}"
systemctl start rpcbind 2>/dev/null || true
log_info "NFS service started and enabled"
}
configure_firewall() {
echo "[6/8] Configuring firewall..."
case "$FIREWALL_CMD" in
ufw)
if systemctl is-active --quiet ufw; then
ufw allow from "${CLUSTER_NETWORK}" to any port 2049
ufw allow from "${CLUSTER_NETWORK}" to any port 111
fi
;;
firewalld)
if systemctl is-active --quiet firewalld; then
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --reload
fi
;;
esac
log_info "Firewall configured for NFS"
}
test_nfs_connectivity() {
echo "[7/8] Testing NFS connectivity..."
local test_mount="/mnt/nfs-test"
mkdir -p "${test_mount}"
if mount -t nfs "${NFS_SERVER_IP}:${NFS_DIR}" "${test_mount}"; then
echo "NFS test successful" > "${test_mount}/test.txt"
if [[ -f "${test_mount}/test.txt" ]]; then
log_info "NFS connectivity test passed"
else
log_error "Failed to write test file"
fi
umount "${test_mount}"
else
log_error "Failed to mount NFS share"
exit 1
fi
rmdir "${test_mount}"
}
verify_installation() {
echo "[8/8] Verifying installation..."
if [[ "$MODE" == "server" ]]; then
if systemctl is-active --quiet "${NFS_SERVICE}"; then
log_info "NFS server is running"
else
log_error "NFS server is not running"
exit 1
fi
if showmount -e localhost | grep -q "${NFS_DIR}"; then
log_info "NFS export is active"
else
log_error "NFS export not found"
exit 1
fi
else
if showmount -e "${NFS_SERVER_IP}" | grep -q "${NFS_DIR}"; then
log_info "Can see NFS exports from server"
else
log_error "Cannot see NFS exports from server"
exit 1
fi
fi
log_info "Installation verification completed successfully"
}
generate_k8s_manifests() {
echo "Generating Kubernetes manifests..."
cat > nfs-pv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-01
  labels:
    type: nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: ${NFS_SERVER_IP}
    path: "${NFS_DIR}"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage
EOF
cat > nfs-storageclass.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
log_info "Kubernetes manifests generated: nfs-pv.yaml, nfs-storageclass.yaml"
}
main() {
if [[ $# -lt 2 ]] || [[ $# -gt 3 ]]; then
usage
fi
MODE="$1"
CLUSTER_NETWORK="$2"
if [[ "$MODE" != "server" ]] && [[ "$MODE" != "client" ]]; then
log_error "Mode must be 'server' or 'client'"
usage
fi
if [[ "$MODE" == "client" ]] && [[ $# -ne 3 ]]; then
log_error "NFS server IP required for client mode"
usage
fi
if [[ $# -eq 3 ]]; then
NFS_SERVER_IP="$3"
validate_ip "$NFS_SERVER_IP"
fi
validate_network "$CLUSTER_NETWORK"
check_prerequisites
detect_distro
log_info "Starting NFS ${MODE} installation for Kubernetes..."
log_info "Cluster network: ${CLUSTER_NETWORK}"
[[ -n "$NFS_SERVER_IP" ]] && log_info "NFS server: ${NFS_SERVER_IP}"
update_system
if [[ "$MODE" == "server" ]]; then
install_nfs_server
create_nfs_directory
configure_nfs_exports
start_nfs_service
configure_firewall
# Advertise the first local address; adjust if the server is multi-homed
NFS_SERVER_IP=$(hostname -I | awk '{print $1}')
generate_k8s_manifests
else
install_nfs_client
configure_firewall
test_nfs_connectivity
fi
verify_installation
log_info "NFS ${MODE} installation completed successfully!"
if [[ "$MODE" == "server" ]]; then
echo ""
log_info "Next steps:"
echo "1. Run this script on Kubernetes nodes with: $SCRIPT_NAME client $CLUSTER_NETWORK $NFS_SERVER_IP"
echo "2. Apply Kubernetes manifests: kubectl apply -f nfs-pv.yaml -f nfs-storageclass.yaml"
fi
}
main "$@"
Review the script before running. On the NFS server, execute: sudo bash install.sh server 192.168.1.0/24. On each Kubernetes node, execute: sudo bash install.sh client 192.168.1.0/24 192.168.1.10.