Deployment is the controller most people touch every day. ReplicaSet keeps a replica count stable, while Deployment manages rollout history, version changes, and safer updates for stateless workloads.
What each controller is responsible for
- ReplicaSet: keeps the right number of matching Pods running.
- Deployment: creates and updates ReplicaSets over time.
- Rolling update: replaces old Pods with new ones in a controlled way.
If you remember one line, remember this: you usually manage Deployments, not ReplicaSets directly.
Why Deployment exists
ReplicaSet alone is not enough for normal application delivery. You also need rollout history, rollback, progress tracking, and update strategy. Deployment adds that operational layer on top of ReplicaSet.
That is why deleting Pods by hand is not a release strategy. The controller model expects you to change desired state and let Kubernetes manage the transition.
Minimal example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: nginx:1.25
The most important rule: selector and labels must match
The Deployment selector must match the Pod template labels. If they drift apart, the controller cannot manage the Pods correctly, and traffic or rollout behavior will become confusing fast.
This is also why the selector is effectively frozen after creation: in apps/v1 the Deployment selector is immutable, because changing it would orphan the Pods the Deployment already owns.
What happens during a rollout
When you change the Pod template, Kubernetes does not update the old ReplicaSet in place.
Instead it:
- creates a new ReplicaSet
- scales the new one up
- scales the old one down
- keeps tracking progress until the rollout is complete
That is why rollout history exists and why old ReplicaSets can still be visible after an update.
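Concretely, changing only the image in the Pod template is enough to trigger this whole sequence. A minimal sketch of the change, based on the example manifest above:

```yaml
spec:
  template:
    spec:
      containers:
        - name: api
          image: nginx:1.26   # was nginx:1.25; any template change starts a new rollout
```

Scaling `replicas` alone does not create a new ReplicaSet; only Pod template changes do.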
Rolling update vs recreate
RollingUpdate
This is the default, and usually the right choice.
Use it when:
- old and new versions can briefly coexist
- you want safer rollout and rollback behavior
- you need traffic continuity
Recreate
This stops old Pods before starting new ones.
Use it only when:
- the workload cannot run mixed versions
- startup order or exclusive locks make overlap unsafe
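If you do need Recreate, it is a one-line strategy change (a sketch against the example manifest above; note that the `rollingUpdate` fields must be removed when the type is Recreate):

```yaml
spec:
  strategy:
    type: Recreate   # terminate all old Pods before any new Pod starts
```

Expect a window of full downtime between the old Pods stopping and the new Pods becoming ready.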
Rollout commands worth memorizing
kubectl rollout status deploy/api
kubectl rollout history deploy/api
kubectl rollout undo deploy/api
kubectl rollout pause deploy/api
kubectl rollout resume deploy/api
These commands are much more useful during incidents than trying random edits in the live cluster.
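Two variations are worth knowing as well: `--to-revision` rolls back to a specific entry from rollout history rather than just the previous one, and `--timeout` keeps status from waiting indefinitely. (Revision number and deployment name here are illustrative.)

```shell
kubectl rollout undo deploy/api --to-revision=2
kubectl rollout status deploy/api --timeout=120s
```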
Tuning rollout safety
Two fields matter a lot:
- maxSurge: how many extra Pods can exist during the rollout
- maxUnavailable: how many current Pods can be unavailable during the rollout
For critical APIs, maxUnavailable: 0 is a common safe starting point. Note that maxSurge must then be at least 1, since both fields cannot be zero at the same time.
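Both fields also accept percentages of the replica count, which scales better as replicas grows. A sketch:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%       # absolute number or percentage of replicas
    maxUnavailable: 0   # keep full serving capacity during the rollout
```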
Why probes matter to Deployments
Deployment safety depends heavily on readiness. If readiness is wrong, Kubernetes may treat broken Pods as healthy or keep healthy Pods out of Service endpoints.
That is why rollout problems often look like application problems, even though the real issue is probe behavior.
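A minimal readiness probe for the example container might look like this. The path and port are assumptions; point them at a real health endpoint your application actually serves:

```yaml
containers:
  - name: api
    image: nginx:1.25
    readinessProbe:
      httpGet:
        path: /healthz   # assumption: replace with your app's health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```

During a rollout, a new Pod only counts toward availability, and only receives Service traffic, once this probe succeeds.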
Image tags and revision history
Use versioned image tags. Avoid latest for real rollouts.
With a fixed version tag, rollout history makes sense. With mutable tags, it becomes harder to answer basic questions like:
- what changed?
- which revision introduced the problem?
- what exactly am I rolling back to?
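In practice this means pinning the Pod template to an exact tag. The registry and version here are hypothetical:

```yaml
containers:
  - name: api
    image: registry.example.com/api:1.4.2   # hypothetical pinned tag; avoid :latest
```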
Resource requests are part of release safety
If requests are missing, the scheduler has weaker placement signals, HPA becomes less meaningful, and noisy-neighbor behavior gets worse.
Release strategy is not just about version change. It also depends on whether the cluster can place and run the new Pods reliably.
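A hedged starting point for the example container; the numbers are assumptions and should be tuned against observed usage:

```yaml
containers:
  - name: api
    image: nginx:1.25
    resources:
      requests:
        cpu: 100m        # assumption: measure real usage and adjust
        memory: 128Mi
      limits:
        memory: 256Mi
```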
Deployment vs StatefulSet
Use Deployment when:
- replicas are interchangeable
- Pods do not need stable identity
- storage is disposable or externalized
Use StatefulSet when:
- each replica needs its own stable identity
- each replica needs its own persistent storage
- startup and rollout order matter
This is one of the most important controller boundaries to understand early.
Common rollout failure modes
Rollout is stuck
Common causes:
- readiness probe never succeeds
- image pull fails
- resource requests cannot be satisfied by any node, so new Pods stay Pending
- maxUnavailable and replica count are too strict for current capacity
Pods are healthy but traffic fails
Often this is not a Deployment bug at all. Check whether the Service selector still matches the Pod labels.
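A matching Service for the example Deployment would select the same labels (a sketch):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api        # must match the Deployment's Pod template labels
  ports:
    - port: 80
      targetPort: 80
```

If this selector drifts from the Pod labels, the Service endpoints list goes empty even though every Pod is healthy.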
Old version keeps serving traffic
This can happen if the rollout never finishes or if some endpoints still point to old Pods because readiness or termination is slow.
A practical troubleshooting sequence
kubectl describe deploy api
kubectl get rs
kubectl describe rs <replicaset>
kubectl get pods -l app=api
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp
Start at the Deployment, then move down to ReplicaSet, Pods, and events.
What to check before a release
- selectors and labels match
- image tag is explicit
- readiness reflects real availability
- requests and limits are reasonable
- rollback command is clear
- Service compatibility is still valid
FAQ
Q: Should I edit ReplicaSets directly? A: Usually no. ReplicaSets are rollout artifacts owned by the Deployment, so direct edits often create confusion and drift.
Q: Why does a new Deployment revision create a new ReplicaSet? A: Because Kubernetes treats Pod template changes as a new version boundary. A new ReplicaSet lets the Deployment manage rollout history and rollback safely.
Q: Why does a rollout succeed in staging but fail in production? A: Production often has tighter capacity, stricter probes, different traffic patterns, or Service dependencies that were not exercised in staging.
Next reading
- Continue with kubernetes-quickstart-probes.md to make rollouts safer.
- Read kubernetes-quickstart-service.md to understand traffic exposure.
- For safer production updates, continue into canary and rollout tips.
Wrap-up
Deployment is not just a scaling primitive. It is your basic release controller. If you understand rollout state, ReplicaSet history, and readiness behavior, many “mysterious” release issues stop being mysterious.