A Pod is the smallest schedulable unit in Kubernetes. If you understand Pods clearly, many later topics - Deployments, Services, probes, and debugging - become much easier. If you misunderstand Pods, the rest of the platform will keep feeling more magical than it should.

What a Pod really is

A Pod is one or more containers that share:

  • the same network namespace
  • the same Pod IP
  • the same port space
  • optionally shared volumes

Most Pods should have one main container. Multi-container Pods are useful when the containers truly belong to the same unit of operation.

When multiple containers in one Pod make sense

  • sidecar proxy or metrics agent
  • log shipping or file transformation
  • tightly coupled helper process
  • shared emptyDir workflow inside one Pod

If two containers do not need the same lifecycle, they often belong in different Pods.

Minimal example

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36
      command: ["sh", "-c", "while true; do echo ok; sleep 5; done"]
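The shared-emptyDir pattern from the list above can be sketched like this. The volume name, mount paths, and file names are illustrative assumptions, not required values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-logs-pod        # hypothetical name
spec:
  volumes:
    - name: shared-logs        # emptyDir lives at the Pod level
      emptyDir: {}
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
    - name: log-shipper
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/app.log"]   # assumed log file name
      volumeMounts:
        - name: shared-logs
          mountPath: /logs     # same volume, different mount path
```

Both containers see the same files because the volume belongs to the Pod, not to either container. When the Pod goes away, the emptyDir goes with it.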

What usually matters first in real life

Pods are where you first meet:

  • scheduling decisions
  • image pull failures
  • probe failures
  • restarts
  • resource pressure
  • mounted config and storage

That is why Pods are such a useful debugging entry point.

The evidence chain I trust first

When a Pod is wrong, I usually check these, in order:

kubectl get pod -n <ns> -o wide
kubectl describe pod -n <ns> <pod>
kubectl get events -n <ns> --sort-by=.lastTimestamp
kubectl logs -n <ns> <pod> -c <container> --previous

Those commands reveal most common issues without any guesswork.

Common Pod failure shapes

ImagePullBackOff

Usually image name, registry auth, or network access.
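When the cause is registry auth, the usual fix is an image pull secret referenced from the Pod spec. The secret name and image reference below are placeholders:

```yaml
spec:
  imagePullSecrets:
    - name: my-registry-cred   # assumed secret, created with `kubectl create secret docker-registry`
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder private-registry image
```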

CrashLoopBackOff

Often app exit, bad config, missing dependency, or overly aggressive probes.

OOMKilled

Memory limit too low, memory leak, or requests and limits badly tuned.

Readiness failures

The container may be running, but a failing readiness probe keeps the Pod out of the Service's endpoints, so it receives no traffic.

Running does not mean ready

This is one of the most important Pod lessons.

A Pod can be:

  • running, but not ready
  • alive, but not safe for traffic
  • healthy enough for liveness, but still failing readiness

That is why Running is not the same thing as “the service is fine”.
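One way to keep the two apart: liveness says "restart me if this fails", readiness says "hold traffic until this passes". A hedged sketch, with assumed endpoint paths and timings that you would tune per workload:

```yaml
containers:
  - name: app
    image: nginx:1.25
    readinessProbe:            # gates Service traffic
      httpGet:
        path: /healthz/ready   # assumed endpoint
        port: 80
      periodSeconds: 5
    livenessProbe:             # gates container restarts
      httpGet:
        path: /healthz/live    # assumed endpoint
        port: 80
      initialDelaySeconds: 10  # give the app time to start before judging it dead
      periodSeconds: 10
```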

Scheduling and resources

Pods do not land on nodes at random. The scheduler looks at:

  • requests and limits
  • taints and tolerations
  • affinity rules
  • topology constraints
  • resource availability

If requests are missing, the scheduler has less information to make fair placement decisions.
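Setting requests and limits explicitly might look like this. The numbers are placeholders to tune per workload, not recommendations:

```yaml
containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "100m"       # what the scheduler reserves on the node
        memory: "128Mi"
      limits:
        cpu: "500m"       # CPU is throttled above this
        memory: "256Mi"   # exceeding this gets the container OOMKilled
```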

Pod lifecycle matters

Pods are not permanent machines. They are replaceable workload units.

This affects how you should think about:

  • writing to local filesystem
  • startup and shutdown behavior
  • config loading
  • how state is persisted

If your workload assumes the Pod is immortal, Kubernetes will keep disappointing it.
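Shutdown behavior is one place this shows up concretely. A sketch of graceful termination, with an assumed preStop command standing in for a real drain step:

```yaml
spec:
  terminationGracePeriodSeconds: 30   # SIGKILL arrives after this window
  containers:
    - name: app
      image: nginx:1.25
      lifecycle:
        preStop:
          exec:
            # runs before SIGTERM is delivered; "sleep 5" is a placeholder
            # for letting load balancers stop sending new connections
            command: ["sh", "-c", "sleep 5"]
```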

Security and scope

Pods sit at the point where many policies meet:

  • ServiceAccount identity
  • security context
  • mounted Secrets
  • namespace boundaries
  • NetworkPolicy

That is why Pod debugging often overlaps with auth, config, and runtime security questions.
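Several of those policy surfaces appear directly in the Pod spec. A minimal hardened sketch; the ServiceAccount name is an assumption, and some images (including stock nginx) need extra setup before a read-only root filesystem works:

```yaml
spec:
  serviceAccountName: app-sa           # assumed ServiceAccount in the same namespace
  securityContext:                     # Pod-level defaults
    runAsNonRoot: true
    runAsUser: 1000
  containers:
    - name: app
      image: nginx:1.25
      securityContext:                 # container-level overrides
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
```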

Practical habits that make Pods easier to operate

  • give Pods clear labels
  • set requests and limits deliberately
  • use readiness and liveness for the right reasons
  • mount configs and Secrets intentionally
  • keep logs accessible

Small discipline here makes every higher-level controller easier to reason about.
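Clear labels are cheap to add and pay off in every selector. The keys below follow the common app.kubernetes.io convention; the values are placeholders:

```yaml
metadata:
  name: demo-pod
  labels:
    app.kubernetes.io/name: demo       # placeholder application name
    app.kubernetes.io/component: web
    app.kubernetes.io/part-of: shop    # placeholder project name
```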

FAQ

Q: Should I create Pods directly in production? A: Usually no. Most production workloads should be managed by higher-level controllers such as Deployments or StatefulSets.

Q: Why is my Pod running but my app still unreachable? A: A common cause is readiness failure or Service selector mismatch. Running only means the process exists.

Q: When should I use a multi-container Pod? A: Use it when containers truly share one lifecycle and one local cooperation pattern, not just because two processes are related.

Next reading

  • Continue with kubernetes-quickstart-deployment-replicaset.md to see how Pods are managed at scale.
  • Read kubernetes-quickstart-service.md to understand how traffic reaches Pods.
  • For startup and health behavior, continue with kubernetes-quickstart-probes.md.

Wrap-up

Pods are where Kubernetes becomes concrete. If you can read Pod state well, most higher-level behaviors stop feeling mysterious and start feeling traceable.
