Kubernetes Architecture
Separate control-plane decisions from node execution.
Think of Kubernetes in two layers: the control plane (decisions) and the nodes (execution).
Control plane
- API Server: the single entry point
- etcd: stores cluster state
- Scheduler: picks the target node
- Controller Manager: keeps desired state true
Node components
- kubelet: manages the Pod lifecycle on its node
- Container runtime: containerd or CRI-O
- kube-proxy: programs Service forwarding rules
- CNI/CSI: networking and storage plugins
Simple mental map
kubectl -> apiserver -> etcd
               |-> scheduler
               |-> controllers
nodes: kubelet + runtime + kube-proxy
Once you know these parts, you can tell whether a failure lives in the control plane or on a node.
Practical notes
- Start with a quick inventory: kubectl get nodes, kubectl get pods -A, and kubectl get events -A.
- Compare desired vs. observed state; kubectl describe usually explains drift or failed controllers.
- Keep names, labels, and selectors consistent so Services and controllers can find Pods.
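The label-and-selector point is easiest to see in a manifest. Here is a sketch (the names `web` and `demo` and the image are illustrative placeholders, not from the source) of a Deployment whose Pod template labels match the Service selector exactly:

```yaml
# Sketch: a Deployment and a Service that find each other via labels.
# All names (web, demo) and the image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web          # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  selector:
    app: web            # same label: the Service selects these Pods
  ports:
    - port: 80
      targetPort: 80
```

If the selector and the template labels drift apart, the Service silently finds zero endpoints, which is one of the most common "it deployed but nothing responds" failures.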
Quick checklist
- The resource matches the intent you described in YAML.
- Namespaces, RBAC, and images are correct for the target environment.
- Health checks and logs are in place before promotion.
Reading Kubernetes architecture as a system
Kubernetes architecture makes more sense when you see Kubernetes as a set of contracts between people and controllers. You describe intent in YAML, controllers reconcile it, and the cluster keeps trying until reality matches the spec. That mental model is the same whether you are creating a Pod, a Service, or an entire environment. The key is to think in desired state, not imperative steps, and to accept that reconciliation is continuous.
Declarative workflow in practice
A typical workflow is: write a manifest, apply it, observe status, then refine. The applied object becomes the source of truth, not the CLI command you typed. When you change a field, you are changing the desired state, and controllers take action to reach it. This encourages idempotent changes, repeatable rollouts, and safe automation, because the same manifest can be applied in any environment without manual rework.
Object model and API habits
Every object follows the same shape: apiVersion, kind, metadata, and spec. The metadata block is not busywork. Names, labels, and annotations are how other objects discover and manage your resource. Establish naming conventions, label taxonomy, and ownership references early, because they become the glue for services, selectors, and monitoring queries. A clean object model is the difference between a cluster you can reason about and one that feels unpredictable.
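As a sketch of that common shape (the names and labels are illustrative):

```yaml
# Every Kubernetes object shares this top-level shape.
apiVersion: v1            # API group/version the object belongs to
kind: Pod                 # the resource type
metadata:
  name: web-0             # illustrative name
  namespace: demo
  labels:                 # labels are how selectors and queries find this object
    app: web
    team: platform
spec:                     # the desired state controllers reconcile toward
  containers:
    - name: web
      image: nginx:1.25
```

The `metadata.labels` block is where the naming conventions and label taxonomy mentioned above actually live.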
How control loops change your thinking
Control loops keep running, even when you are not watching. That means retries are normal, and eventual consistency is a feature. When a node disappears, schedulers place replacement Pods. When configuration changes, controllers roll out updates. This is why Kubernetes wants you to specify intent rather than sequence, and why most operations are safe to repeat. Design your workflows around observation and rollback instead of manual fixes.
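The reconcile pattern itself is simple enough to sketch outside Kubernetes. Below is a conceptual Python model, not real controller code: state is just a dict and "acting" only adjusts a replica count, but it shows the loop shape of observe, diff, act, repeat.

```python
# Conceptual sketch of a reconciliation loop, in the spirit of a
# Kubernetes controller. Not real controller code: state is a dict
# and "acting" just updates a replica count.

def reconcile(desired: dict, observed: dict) -> dict:
    """One pass of the control loop: move observed toward desired.

    Returns the actions taken; an empty dict means no drift.
    """
    actions = {}
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions[name] = f"scale up by {want - have}"
            observed[name] = want
        elif have > want:
            actions[name] = f"scale down by {have - want}"
            observed[name] = want
    return actions

desired = {"web": 3}
observed = {"web": 1}

# The loop keeps running; a pass with no drift produces no actions.
first = reconcile(desired, observed)   # acts to close the gap
second = reconcile(desired, observed)  # no drift, nothing to do
print(first, second)
```

Note that calling `reconcile` again is harmless: like a real controller pass, it is idempotent once reality matches the spec.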
Minimal lab sequence
A small lab makes these ideas concrete. Create a namespace, deploy a simple workload, expose it, and then observe. You can follow a sequence like create or apply, check status, describe for events, and watch logs. Repeat this loop for a few objects to build intuition about the event stream and the pace of reconciliation.
kubectl create namespace demo
kubectl apply -f app.yaml
kubectl get pods -n demo
kubectl describe pod -n demo
kubectl get events -n demo --sort-by=.metadata.creationTimestamp
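The sequence above assumes an app.yaml; a minimal one might look like this (the name `hello` and the image are placeholders):

```yaml
# Minimal app.yaml sketch for the lab sequence above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: demo
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25
          ports:
            - containerPort: 80
```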
Common misreads and corrections
New users often treat Kubernetes like a PaaS or a fancy VM scheduler. That framing usually causes confusion. Kubernetes is a control system that assumes your app can be restarted. Pods come and go, and that’s normal. Services aren’t “a running process” either; they’re stable virtual entry points that select backends.
One more thing that trips people up: namespaces help you organize and separate resources, but they’re not a security boundary by themselves. If you need real isolation, you’re looking at RBAC, NetworkPolicies, admission rules, and the rest of the security toolbox.
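As one piece of that toolbox, a default-deny ingress policy per namespace is a common starting point. This is a sketch (the `demo` namespace is a placeholder), and it only takes effect if the cluster’s CNI plugin enforces NetworkPolicy:

```yaml
# Sketch: deny all ingress to Pods in the demo namespace by default.
# Enforcement requires a CNI plugin that supports NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}        # empty selector = all Pods in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied
```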
On the ops side, the boring habits pay off fast. Keep manifests in version control. Pick a label scheme you’ll still like six months from now (owner/team/env are a good start). Don’t leave requests/limits empty unless you’re intentionally experimenting. And turn on logs and metrics early—most “Kubernetes issues” end up being “we don’t have enough signals to prove what happened”.
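For the requests/limits habit, the container fragment looks like this (the values are illustrative, not a sizing recommendation):

```yaml
# Container fragment with explicit resources (illustrative values).
containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:          # what the scheduler reserves for placement
        cpu: 100m
        memory: 128Mi
      limits:            # hard caps enforced at runtime
        cpu: 500m
        memory: 256Mi
```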
You’ll also run into tradeoffs pretty quickly: Deployment vs StatefulSet, rolling vs recreate, scale out vs scale up. There usually isn’t a single right answer. What helps is writing down why you chose option A today, so future you (or your teammate) isn’t stuck guessing the context.
If you want one paragraph to keep in your head: the API server is the front door, etcd is the memory, the scheduler decides placement, and controllers keep pushing reality toward what you wrote in spec. When something breaks, it’s often faster to ask “did the cluster decide wrong?” than “did the node fail to carry it out?”
Also, APIs change. Old fields get deprecated. Stick to stable API versions for production and skim release notes before upgrades. It’s a small habit that prevents very loud surprises.
And finally, Kubernetes won’t do your engineering work for you. It doesn’t build images, doesn’t run your CI, and doesn’t magically secure your application. You still need a registry, scans, CI/CD, and app-level monitoring. Kubernetes just makes the runtime side repeatable.
My go-to pre-apply checklist is simple: what’s my desired state, which controller owns it, what’s my failure signal, and what’s the rollback? If the answer is fuzzy, the deployment will be fuzzy too. It also forces the “cleanup” question: what sticks around after delete (PVCs, external resources), and what’s safe to recreate.