Ephemeral Volumes
Ephemeral volumes live with the Pod and suit caches and temp files.
An ephemeral volume is created with its Pod and deleted when the Pod is removed. That makes it lightweight, fast, and safe for transient data: caches, scratch files, and anything else that does not need to survive restarts.
This quick start expands on use cases, volume types, sizing, and the operational caveats you need to know before relying on ephemeral storage.
What counts as an ephemeral volume
The most common ephemeral volume is emptyDir. Kubernetes also supports generic ephemeral volumes that can use StorageClasses for short-lived claims.
Typical use cases
- Runtime cache and compiled assets
- Intermediate artifacts during data processing
- Shared files between the main container and a sidecar
- Scratch space for temp files
emptyDir basics
emptyDir is created when the Pod is scheduled and deleted when the Pod is removed:
```yaml
volumes:
- name: scratch
  emptyDir:
    sizeLimit: 1Gi
```
You can mount it into multiple containers to share data:
```yaml
volumeMounts:
- name: scratch
  mountPath: /tmp
```
Memory-backed emptyDir
For high-speed temp data, you can use memory as the medium:
```yaml
emptyDir:
  medium: Memory
  sizeLimit: 512Mi
```
This backs the volume with node memory (tmpfs). It is fast, but files written to it count against the containers' memory usage, so an oversized tmpfs volume can push a Pod into OOM territory.
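Because tmpfs pages are charged to the containers that write them, it helps to size the memory limit with the volume in mind. A minimal sketch (the image name and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fast-scratch            # illustrative name
spec:
  containers:
  - name: app
    image: my-app:latest
    resources:
      limits:
        memory: "1Gi"           # budget covers the 512Mi tmpfs plus the app itself
    volumeMounts:
    - name: fast
      mountPath: /fast
  volumes:
  - name: fast
    emptyDir:
      medium: Memory
      sizeLimit: 512Mi
```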
Generic ephemeral volume
You can request ephemeral storage from a StorageClass for workloads that need slightly more structure:
```yaml
volumes:
- name: cache
  ephemeral:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 5Gi
```
The PVC is created and deleted with the Pod, which is useful for batch jobs or CI workloads.
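You can watch the claim appear and disappear alongside the Pod; Kubernetes names it by joining the Pod name and the volume name:

```shell
# While the Pod runs, a PVC named <pod-name>-<volume-name> exists,
# e.g. "my-job-cache" for a Pod "my-job" with a volume "cache".
kubectl get pvc

# Deleting the Pod garbage-collects the claim with it.
kubectl delete pod my-job && kubectl get pvc
```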
Sidecar pattern example
Use a sidecar to ship logs or process files from a shared volume:
```yaml
containers:
- name: app
  image: my-app:latest
  volumeMounts:
  - name: shared
    mountPath: /var/log/app
- name: shipper
  image: busybox
  command: ["sh", "-c", "tail -F /var/log/app/app.log"]
  volumeMounts:
  - name: shared
    mountPath: /var/log/app
```
Scheduling and disk pressure
Pods with heavy ephemeral usage can be evicted when nodes experience disk pressure. Kubernetes tracks ephemeral storage usage and may evict Pods that exceed limits. Keep temp data small and use limits where possible.
To view disk pressure and eviction signals:
```shell
kubectl describe node <node>
```
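Evicted Pods stay visible in the Failed phase until they are cleaned up, which makes them easy to audit cluster-wide:

```shell
# List Pods that have failed (includes evictions).
kubectl get pods -A --field-selector=status.phase=Failed

# Show only eviction events.
kubectl get events -A --field-selector=reason=Evicted
```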
Resource requests and limits
You can set ephemeral-storage requests and limits in container resources:
```yaml
resources:
  requests:
    cpu: "100m"
    memory: "256Mi"
    ephemeral-storage: "1Gi"
  limits:
    cpu: "500m"
    memory: "512Mi"
    ephemeral-storage: "2Gi"
```
This helps the scheduler place Pods on nodes with enough local storage.
Lifecycle and cleanup
Ephemeral volumes are tied to Pod lifecycle. If a Pod is rescheduled to another node, its emptyDir contents are lost. This is fine for caches, but it can break workflows that assume persistence. If you need to keep data across restarts, use PVCs.
Plan for data loss by making caches rebuildable and idempotent.
Ephemeral storage and logs
Container logs are stored on the node and count toward ephemeral storage usage. A chatty container can trigger eviction even if its emptyDir usage is small. Set log rotation and avoid excessive debug logging in production.
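If you manage your own nodes, container log rotation is configured on the kubelet. A KubeletConfiguration fragment (the values are illustrative, not recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: "10Mi"    # rotate each container log file at 10 MiB
containerLogMaxFiles: 5        # keep at most 5 files per container
```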
Best practices
- Set size limits on emptyDir whenever possible.
- Keep temp data small and clean it up periodically.
- Separate cache from critical data to avoid accidental loss.
- Use memory-backed emptyDir only for small, latency-sensitive files.
- Prefer predictable paths so cleanup scripts are reliable.
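With predictable paths, cleanup can be a one-liner run from a sidecar or a periodic job. A sketch assuming all cache files live under one directory (the /tmp/app-cache path and 60-minute window are hypothetical):

```shell
#!/bin/sh
# Delete cache files older than 60 minutes under a well-known path.
CACHE_DIR="${CACHE_DIR:-/tmp/app-cache}"   # hypothetical cache location
mkdir -p "$CACHE_DIR"
find "$CACHE_DIR" -type f -mmin +60 -delete
```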
Size limits and behavior
emptyDir.sizeLimit is a soft limit: the kubelet checks usage periodically rather than blocking writes, so a Pod that exceeds it is evicted after the fact. Always budget ephemeral storage alongside CPU and memory so the scheduler can place Pods correctly.
Pods without explicit ephemeral-storage limits can consume more node disk than expected. Setting requests and limits improves predictability and avoids surprise evictions.
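If you use namespaces for isolation, a LimitRange can supply ephemeral-storage defaults for containers that set none of their own; the name and values below are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: ephemeral-defaults       # illustrative name
spec:
  limits:
  - type: Container
    default:
      ephemeral-storage: "2Gi"       # applied when a container sets no limit
    defaultRequest:
      ephemeral-storage: "512Mi"     # applied when a container sets no request
```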
Example: init container for cache warm-up
Use an init container to pre-populate a cache into an ephemeral volume:
```yaml
initContainers:
- name: warm-cache
  image: busybox
  command: ["sh", "-c", "echo warm > /cache/seed.txt"]
  volumeMounts:
  - name: cache
    mountPath: /cache
containers:
- name: app
  image: my-app:latest
  volumeMounts:
  - name: cache
    mountPath: /cache
```
Example: build workspace
CI jobs often need a scratch directory to compile code. An emptyDir works well:
```yaml
volumes:
- name: workspace
  emptyDir: {}
volumeMounts:
- name: workspace
  mountPath: /workspace
```
Keep artifacts you care about by uploading them to object storage before the Pod exits.
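A common pattern is to archive the workspace as the job's final step and push it to object storage; the bucket and upload command below are placeholders for whatever your platform provides:

```shell
#!/bin/sh
# Archive the build workspace so artifacts can outlive the Pod.
WORKSPACE="${WORKSPACE:-/workspace}"          # matches the emptyDir mount above
ARTIFACT="${ARTIFACT:-/tmp/artifacts.tar.gz}"
if [ -d "$WORKSPACE" ]; then
  tar -czf "$ARTIFACT" -C "$WORKSPACE" .
  # Upload with your platform's client, for example:
  #   aws s3 cp "$ARTIFACT" "s3://my-ci-bucket/builds/"   # bucket is hypothetical
fi
```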
Eviction thresholds
Kubernetes evicts Pods when node disk pressure exceeds thresholds. Ephemeral volumes and container logs both contribute. If you see frequent evictions, check node disk usage and consider adding larger nodes or spreading workloads.
Generic ephemeral vs PVC
Generic ephemeral volumes are created from a StorageClass and live only for the Pod lifetime. They are useful for batch jobs that need temporary space larger than emptyDir but still do not need persistence. In contrast, PVCs are designed for long-lived data and should be used for databases or stateful services.
Local SSD and performance
On cloud platforms, nodes often have local SSD or ephemeral disks. These are fast but not durable. Use them for caches or build artifacts, and always expect data loss during node replacement or upgrade.
Observability
To understand ephemeral usage, inspect node allocatable storage and Pod consumption:
```shell
kubectl describe node <node> | grep -E "ephemeral-storage|Allocatable"
kubectl top pod -A
```
Inside a Pod, you can check filesystem usage:
```shell
kubectl exec -it <pod-name> -- df -h
```
If you see low free space, reduce cache sizes or move data to PVCs.
Security considerations
Ephemeral volumes live on the node filesystem. Avoid writing secrets or sensitive data into emptyDir unless you are sure the node is trusted. If you must, use memory-backed storage and keep it small.
If you handle regulated data, consider encrypting data before writing to ephemeral storage or avoiding it entirely. Always sanitize temporary data.
When not to use ephemeral storage
- Databases or any data that must survive Pod restarts.
- Audit logs that must be retained.
- User uploads or business-critical files.
Troubleshooting tips
- Pod Pending: node lacks ephemeral storage capacity.
- Evicted: node disk pressure, reduce temp usage.
- Slow IO: node disk is saturated by other workloads.
If evictions keep happening, reduce cache size or move the workload to nodes with more disk. For batch jobs, consider spreading Pods across more nodes to avoid hotspots.
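Spreading can be sketched with topologySpreadConstraints in the Pod spec; the label values here are illustrative:

```yaml
# Pod spec fragment: spread replicas of a disk-heavy job across nodes.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app: batch-worker        # illustrative label
```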
Also check image cache growth on the node. Old container images can consume significant disk space and trigger pressure.
Diagnostic commands:
```shell
kubectl describe pod <pod-name>
kubectl get events -A
```
Practical notes
- Start with a quick inventory: kubectl get nodes, kubectl get pods -A, and kubectl get events -A.
- Compare desired vs. observed state; kubectl describe usually explains drift or failed controllers.
- Keep names, labels, and selectors consistent so Services and controllers can find Pods.
- Document cache locations so engineers know what data is safe to delete.
Quick checklist
- The resource matches the intent you described in YAML.
- Namespaces, RBAC, and images are correct for the target environment.
- Health checks and logs are in place before promotion.
- Ephemeral usage is bounded with limits.
- Eviction behavior is understood and monitored.
Wrap-up: treat ephemeral storage like a budget
If you don’t put a limit on ephemeral storage, the node will put a limit on you: eviction.
Set bounds, document what’s safe to delete, and watch disk pressure events.