Kubernetes StorageClass Explained: Dynamic Provisioning and Defaults
Understand how StorageClass enables dynamic provisioning in Kubernetes, how default classes work, and how to choose the right storage policy.
StorageClass is where Kubernetes storage stops being just “a disk” and starts becoming policy. It decides how PVCs are provisioned, what kind of storage they get, and what should happen after claims are deleted.
What StorageClass controls
- which provisioner creates the volume
- which parameters the backend should use
- reclaim behavior after delete
- binding behavior in topology-aware clusters
- sometimes expansion and performance expectations
In practice, StorageClass is the default contract behind PVC creation.
Why StorageClass matters
If you get StorageClass wrong, PVCs may sit in Pending, data may be deleted unexpectedly, or expensive premium volumes may be provisioned for the wrong workloads.
This is why StorageClass is not just a storage admin object. It affects app teams every day.
It is also the missing link between this page and kubernetes-quickstart-pv-pvc.md: PVC is where the workload asks for storage, while StorageClass is how the cluster decides which provisioner should create that storage dynamically.
Minimal example
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: csi.example.com
parameters:
  type: ssd
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
The fields to understand first
provisioner
This identifies the storage backend integration, usually a CSI driver.
If the provisioner is missing or unhealthy, PVCs may never bind.
reclaimPolicy
- Delete: remove the underlying volume when the claim goes away
- Retain: keep the underlying volume for manual cleanup or recovery
This setting has real data-loss implications.
volumeBindingMode
This matters in clusters with zone-aware storage.
- Immediate: bind as soon as the PVC is created
- WaitForFirstConsumer: wait until a Pod is scheduled so storage can align with placement
For zonal backends, WaitForFirstConsumer is often safer.
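WaitForFirstConsumer is often paired with allowedTopologies to restrict where volumes may be created. The sketch below is illustrative: the class name, provisioner, and zone values are placeholders, not defaults for any specific driver.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-zonal              # hypothetical class name
provisioner: csi.example.com    # placeholder CSI driver
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:              # limit provisioning to these zones
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1a
          - us-east-1b
```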
allowVolumeExpansion
Lets claims grow when the backend supports it. Useful, but still something you should test before relying on it in production.
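Expansion is triggered from the claim side: you raise spec.resources.requests.storage on an existing PVC and re-apply it, provided the class sets allowVolumeExpansion: true. A sketch with a hypothetical claim name and class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim           # hypothetical existing claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast     # must allow expansion
  resources:
    requests:
      storage: 20Gi          # raised from an earlier, smaller request
```

Whether the filesystem grows online or requires a Pod restart depends on the CSI driver, so verify the behavior with your backend first.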
Default StorageClass behavior
Many clusters have a default StorageClass. If a PVC does not specify one, Kubernetes uses the default.
This is convenient, but also risky when teams assume they know what default means and the platform changes it later.
Know your default class before trusting it with real data.
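The default is marked with an annotation on the StorageClass itself, and `kubectl get storageclass` shows it as `(default)`. A sketch of how a class is designated as default (the class name and provisioner are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                 # hypothetical class name
  annotations:
    # this annotation is what makes a class the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.example.com     # placeholder CSI driver
reclaimPolicy: Delete
```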
A good starting strategy
For many teams, two classes are enough at the beginning:
- standard: cheaper general-purpose storage
- fast: higher-performance storage for databases or latency-sensitive systems
Too many classes too early usually create confusion without adding much real value.
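A sketch of what that two-class starting point could look like. The provisioner and parameters are placeholders; the reclaim policies reflect one reasonable policy choice, not a rule.

```yaml
# Hypothetical two-class setup; adapt provisioner and parameters
# to your actual CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: csi.example.com
parameters:
  type: hdd
reclaimPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: csi.example.com
parameters:
  type: ssd
reclaimPolicy: Retain      # a recovery window often makes sense for databases
```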
StorageClass vs PV/PVC
- PVC says what the app wants.
- PV is the resulting storage object.
- StorageClass defines how the platform should fulfill the request.
That separation is useful because it lets app YAML stay stable even if storage implementation changes underneath.
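The separation is visible in the claim itself: storageClassName is the only field that ties the app to a policy, so swapping tiers does not touch the rest of the spec. A minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data             # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast     # the policy selector; everything else stays stable
  resources:
    requests:
      storage: 10Gi
```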
If you want the workload-facing view first, read kubernetes-quickstart-pv-pvc.md. If you want the platform-facing view, start here and treat StorageClass as the policy layer behind dynamic provisioning.
Common provisioners you will actually see
Different clusters use different provisioners depending on environment and storage expectations. A few common choices show up again and again:
rancher/local-path-provisioner
- GitHub: https://github.com/rancher/local-path-provisioner
- Best for: local labs, K3s, single-node or lightweight dev clusters
- Deployment style: usually installed by K3s automatically, or applied as manifests in a small lab cluster
- Tradeoff: simple and fast to start with, but not suitable for serious multi-node durability
kubernetes-sigs/nfs-subdir-external-provisioner
- GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
- Best for: shared RWX workloads backed by an existing NFS server
- Deployment style: usually installed with Helm or upstream manifests
- Tradeoff: easy shared storage, but performance and locking behavior depend heavily on the NFS backend
longhorn/longhorn
- GitHub: https://github.com/longhorn/longhorn
- Best for: bare-metal or self-managed clusters that need distributed block storage
- Deployment style: commonly installed with Helm or the Longhorn UI/manifest flow
- Tradeoff: good Kubernetes-native operator experience, but still requires capacity planning and operational discipline
rook/rook with Ceph CSI
- GitHub: https://github.com/rook/rook
- Best for: teams that want a full distributed storage platform inside Kubernetes
- Deployment style: deploy the Rook operator first, then create a Ceph cluster and Ceph-backed StorageClasses
- Tradeoff: powerful and flexible, but much heavier operationally than lab-grade provisioners
kubernetes-sigs/aws-ebs-csi-driver
- GitHub: https://github.com/kubernetes-sigs/aws-ebs-csi-driver
- Best for: EKS or AWS-based clusters using zonal block volumes
- Deployment style: often installed as an EKS add-on or with Helm
- Tradeoff: production-standard for AWS block storage, but tied to zonal behavior and cloud provider limits
Typical installation patterns
You do not usually install a provisioner by applying a StorageClass first. The usual order is:
- install the provisioner or CSI driver
- confirm its controller and node components are healthy
- create one or more StorageClasses that reference that provisioner
- create a test PVC and verify provisioning actually works
That order matters. A StorageClass can exist long before the provisioner behind it is actually functional.
Example StorageClass definitions for common CSI backends
Longhorn example
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
Rook Ceph RBD example
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
reclaimPolicy: Delete
allowVolumeExpansion: true
AWS EBS CSI example
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  fsType: ext4
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
These examples are not universal defaults. They are starting points. Always verify the provisioner name, supported parameters, reclaim policy, and topology behavior in the driver documentation before using them.
Common failure modes
PVC stays Pending
Start with:
- does the StorageClass exist?
- does the named provisioner actually run?
- does the backend support the requested size and access mode?
- is topology blocking provisioning?
Wrong storage tier gets used
This often happens when people rely on the default class without realizing what it points to.
Data disappears after cleanup
Very often a reclaim policy surprise.
If the class uses Delete, removing the claim may remove the underlying storage too.
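One mitigation before deleting a claim is to flip the bound PV's spec.persistentVolumeReclaimPolicy to Retain, for example with `kubectl patch pv <pv-name> --patch-file` and a small patch file. A sketch of the patch fragment (the PV name is whatever your claim is bound to):

```yaml
# Patch fragment: switches an existing PV to Retain so deleting
# the claim no longer deletes the underlying storage.
spec:
  persistentVolumeReclaimPolicy: Retain
```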
Topology and scheduling
Some storage backends are tied to zones. If storage is provisioned too early in the wrong zone, Pods may later fail to schedule where they need to.
That is why volumeBindingMode is not a minor detail. It can decide whether storage and scheduling cooperate or fight each other.
Cost and performance tradeoffs
StorageClass is one of the easiest places to accidentally overspend.
- high-IOPS classes feel safe but can be expensive
- cheap network storage may be flexible but too slow for write-heavy databases
- over-provisioning performance is common when teams do not measure actual needs
It is usually better to start with a small set of clear policies and adjust based on real metrics.
A practical validation sequence
Before trusting a new StorageClass, run a small test:
kubectl get storageclass
kubectl apply -f pvc-test.yaml
kubectl get pvc
kubectl describe pvc test-claim
kubectl get pv
This tiny smoke test catches bad provisioners, wrong reclaim settings, and topology surprises early.
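The pvc-test.yaml referenced above could look like the following. The storageClassName is an assumption; point it at the class you are actually validating.

```yaml
# One possible pvc-test.yaml for the smoke test.
# "standard" is a placeholder; substitute the class under test.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
```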
If you want to see how the PVC side of this flow looks from the workload perspective, jump to kubernetes-quickstart-pv-pvc.md and compare the claim YAML there with the StorageClass examples here.
FAQ
Q: Do I always need to set storageClassName on a PVC?
A: Not always, but relying on the default class is only safe when you really know what that default is and why it exists.
Q: When should I use Retain instead of Delete?
A: Use Retain when accidental deletion would be expensive or when you want a recovery window before cleaning up the underlying volume.
Q: Why is WaitForFirstConsumer useful?
A: Because it lets Kubernetes consider Pod scheduling before finalizing volume placement, which helps with zonal backends and topology-aware clusters.
Next reading
- Continue with kubernetes-quickstart-pv-pvc.md if you want the workload-facing side.
- Read kubernetes-quickstart-statefulset.md to see how storage policy affects real stateful apps.
- For databases, continue into the MySQL quickstart pages.
Wrap-up
StorageClass is where cluster defaults quietly become workload behavior. If you understand the default class, reclaim policy, and binding mode, you avoid a surprising amount of storage pain.