Kubernetes NFS Server: A Practical Guide for Shared Storage in Kubernetes

In modern Kubernetes environments, stateful applications demand reliable, scalable storage that can be shared across multiple pods. A Kubernetes NFS server offers a simple, proven approach to provide network file system storage for workloads that require ReadWriteMany access. This article explains what a Kubernetes NFS server is, compares common provisioning patterns, and walks you through practical deployment examples, tips for security and performance, and some troubleshooting pointers. By the end, you’ll be able to decide whether static PV provisioning or dynamic provisioning with an NFS client provisioner best fits your cluster.

What is a Kubernetes NFS server and when to use it

NFS, or Network File System, is a well-established protocol for sharing files over a network. A Kubernetes NFS server is a server process that exports a directory (or directories) to clients in the cluster. When you integrate an NFS server with Kubernetes, you typically create PersistentVolumes (PVs) that point to the exported NFS paths, and then PersistentVolumeClaims (PVCs) that pods can bind to. This model makes it possible for multiple pods across nodes to mount the same shared directory, enabling scenarios like:

– Shared storage for web front-ends and statically served assets
– Collaborative workloads that need concurrent access to the same data
– Centralized logs or data lakes that require a common repository

If your workload benefits from a simple, interoperable storage layer and you don’t need ultra-high IOPS or complex storage policies, a Kubernetes NFS server is a solid choice. However, for very large-scale or latency-sensitive deployments, you may want to compare NFS with other storage backends such as distributed block storage or cloud-native options.

Two common provisioning patterns

– Static provisioning (PV backed by NFS): Create a PersistentVolume that directly references an NFS export (server and path). Then bind it to a PVC. This approach is straightforward and transparent, but you must manage PV creation and lifecycle manually or via a small automation script.
– Dynamic provisioning with an NFS client provisioner: Install an external NFS client provisioner in the cluster. It creates PVs on demand for new PVCs that reference a StorageClass, typically by carving out subdirectories of an existing NFS server export. This pattern simplifies ongoing management for teams that frequently create and discard volumes.

Both approaches rely on a stable NFS export directory and proper permissions. Your choice depends on how dynamic your storage needs are and how much operational automation you want to maintain.

Setting up a simple NFS server for Kubernetes

Below is a practical, minimal setup to run an NFS server inside the cluster and expose an NFS export that Kubernetes can consume. The example uses a small NFS server container and a hostPath-backed export directory. In production, you may prefer to run the NFS server on dedicated nodes or outside the cluster for resilience and scaling.

Note: The manifests below assume you’re using a commonly available NFS server container image and that you have a node with accessible storage at /srv/nfs. Adjust paths and images to match your environment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  labels:
    app: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: itsthenetwork/nfs-server-alpine:latest
        # This image typically needs privileged mode (for the kernel NFS server)
        # and a SHARED_DIRECTORY variable naming the path to export.
        securityContext:
          privileged: true
        env:
        - name: SHARED_DIRECTORY
          value: /exports
        ports:
        - containerPort: 2049
        volumeMounts:
        - name: nfs-data
          mountPath: /exports
      volumes:
      - name: nfs-data
        hostPath:
          path: /srv/nfs
          type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-service
spec:
  selector:
    app: nfs-server
  ports:
  - protocol: TCP
    port: 2049
    targetPort: 2049
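
After applying both manifests, note the Service’s ClusterIP. Because NFS volumes are mounted by the kubelet on the node rather than inside the pod, the in-cluster DNS name (nfs-service.default.svc.cluster.local, assuming the default namespace) may not resolve there on every setup, so PVs commonly reference the ClusterIP (or the address of an external NFS server) directly. A quick check:

kubectl get svc nfs-service
# Use the CLUSTER-IP value shown here as the server: field in the PV examples below.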

Operational notes:

  • Ensure the /srv/nfs directory on the host exists and is writable by the NFS server process. You may want to set permissions to 0777 for initial testing and tighten them later.
  • Configure appropriate export options (e.g., rw,sync,no_subtree_check) on your NFS server; a sample exports entry follows this list. The default image usually exports /exports; the hostPath provides the actual storage directory.
  • Use a dedicated node or a small set of nodes for the NFS server to improve reliability and performance.
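
If you run the NFS server outside the cluster (or customize the image’s export configuration), the export options live in /etc/exports on the server. A minimal sketch, assuming your cluster nodes sit in 10.0.0.0/16 (adjust the path and CIDR to your environment):

# /etc/exports — export /exports read-write to the node subnet
/exports  10.0.0.0/16(rw,sync,no_subtree_check)

# Reload the export table after editing
exportfs -ra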

Static PV provisioning for NFS

If you prefer explicit control over PVs, you can create a static PV that points to the NFS export, then create a PVC that binds to it. Setting storageClassName: "" on the claim disables dynamic provisioning for it, so it binds only to a matching pre-created PV.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-5gi
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /exports
    server: 192.168.1.100   # replace with your NFS server address (e.g., the nfs-service ClusterIP)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-5gi
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""

Usage example in a pod or deployment:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-consumer
spec:
  containers:
  - name: app
    image: alpine
    command: ["/bin/sh", "-c", "while true; do echo hi; sleep 30; done"]
    volumeMounts:
    - name: data
      mountPath: /mnt/nfs
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-nfs-5gi
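
Because the claim is ReadWriteMany, the same PVC can back several replicas at once; pods scheduled on different nodes all see the same files. A minimal sketch using the claim above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-shared-writers
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-shared-writers
  template:
    metadata:
      labels:
        app: nfs-shared-writers
    spec:
      containers:
      - name: app
        image: alpine
        # Each replica appends to the same shared file over NFS
        command: ["/bin/sh", "-c", "while true; do echo $(hostname) >> /mnt/nfs/writers.log; sleep 30; done"]
        volumeMounts:
        - name: data
          mountPath: /mnt/nfs
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-nfs-5gi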

Dynamic provisioning with an NFS client provisioner

If your team wants to provision storage on demand, dynamic provisioning with an NFS client provisioner is a popular choice. The idea is to deploy a provisioner in the cluster that talks to an NFS server and creates PVs on demand for PVCs that specify a StorageClass. The exact manifest can vary by provisioner, but the workflow typically looks like this:

– Deploy an NFS client provisioner in a dedicated namespace.
– Create a StorageClass, e.g., nfs-client.
– Create a PVC that requests storage with storageClassName: nfs-client.
– The provisioner creates a PV that points to a unique export path on the NFS server and binds it to the PVC.

A typical deployment for the provisioner (high level) would include:
– A service account with appropriate permissions
– A deployment for the provisioner container
– A secret or ConfigMap for NFS server configuration
– The StorageClass that references the provisioner

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: example.com/nfs   # placeholder; must match the name your deployed provisioner registers
reclaimPolicy: Retain
volumeBindingMode: Immediate

For production, follow the specific provisioner’s docs for exact images, environment variables, and RBAC permissions. The benefit is that developers can request storage without manual PV creation, while the NFS server remains the shared backend.
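
With the provisioner running and the StorageClass defined, a developer simply requests storage and the provisioner handles PV creation behind the scenes. A sketch of such a claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi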

Usage considerations: performance, security, and backups

– Performance: NFS performance depends on the network, server hardware, and the efficiency of the NFS export. For higher throughput, consider dedicated network interfaces, tuned NFS export options, and caching strategies. If you deploy NFS in a multi-node cluster, ensure the server has enough CPU, memory, and IOPS headroom.
– Security: Use proper access controls on the NFS export. If possible, limit exports to the Kubernetes cluster subnet, and enforce file permissions that align with your app requirements. In production, you may isolate the NFS server behind a network policy or a firewall (a sample policy follows this list) and consider using Kerberos-based authentication for stronger security.
– Backups and durability: Treat the NFS storage as a critical data path. Regularly back up the exported directory, and test restore procedures. If high availability is needed, run a pair of NFS servers with proper failover, or use a distributed storage solution that aligns with your data protection policy.
– Access modes and concurrency: ReadWriteMany (RWX) is common for shared data, but ensure your workloads can tolerate concurrent access. Some applications may require coordination for exclusive writes; design application logic accordingly.
– Monitoring: Keep an eye on NFS server metrics (latency, queue length, export status) and Kubernetes metrics (PVC usage, node resource pressure) to prevent bottlenecks.
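
As an illustration of the network isolation mentioned above, the sketch below restricts ingress to the in-cluster NFS server pods to port 2049 from a given address range. Note that PV-based NFS mounts are performed by the kubelet, so traffic typically originates from node IPs; the CIDR here is an assumption you must replace with your node/pod network, and your CNI must enforce NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nfs-server-ingress
spec:
  podSelector:
    matchLabels:
      app: nfs-server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16   # assumption: replace with your node/pod network range
    ports:
    - protocol: TCP
      port: 2049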

Troubleshooting tips

– Verify the NFS server export is reachable from cluster nodes (e.g., via ping and mount tests from a temporary pod; see the test pod sketch after this list).
– Check file permissions on the exported directory to avoid permission denied errors when pods mount the volume.
– Inspect the PV and PVC bindings with kubectl describe to understand binding status and events.
– Ensure the NFS client in the cluster has networking access to the NFS server and that the export path matches what PVs point to.
– If using a dynamic provisioner, review provisioner logs and StorageClass configuration for misconfigurations.
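
For the mount test mentioned above, a throwaway pod that mounts the export directly (bypassing PV/PVC) isolates NFS problems from Kubernetes binding problems. The server address and path below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-debug
spec:
  containers:
  - name: shell
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: test
      mountPath: /mnt/test
  volumes:
  - name: test
    nfs:
      server: 192.168.1.100   # replace with your NFS server address
      path: /exports

If the pod sticks in ContainerCreating, kubectl describe pod nfs-debug usually surfaces the underlying mount error; if it starts, kubectl exec -it nfs-debug -- touch /mnt/test/ok verifies write access.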

Conclusion

A Kubernetes NFS server provides a practical bridge between traditional network file systems and modern container orchestration. Whether you choose static PV provisioning backed by a well-defined NFS export or dynamic provisioning with an NFS client provisioner, the core premise remains the same: a reliable shared storage backend that multiple pods can access concurrently. By carefully configuring the NFS server, selecting the right provisioning pattern, and attending to security and performance considerations, you can unlock simple, robust sharing for stateful workloads in Kubernetes. As your cluster evolves, you may revisit storage strategies, but a well-tuned Kubernetes NFS server often remains a dependable cornerstone for many applications.