Kubernetes won the orchestrator wars and proceeded to be adopted by thousands of teams that did not need it. The result is a strange industry-wide situation where small teams operate enormous Kubernetes clusters to run three services that would have been fine on a $20 VPS, and the resulting complexity becomes its own full-time job.
This article is the version we wish someone had given us before our first cluster: what Kubernetes actually is, the small set of concepts that matter, and the honest answer to "do we need this?".
What it is, briefly
Kubernetes is a system for running containers on a cluster of machines, with declarative configuration. You describe the desired state ("run 3 copies of this image, expose it on port 80, restart on crash, scale up if CPU goes above 70%"), and Kubernetes makes it so. If a node dies, the workloads on it are rescheduled to other nodes. If the deployment is updated, old containers are gradually replaced with new ones. If a container crashes, it restarts.
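As a small, concrete taste of that declarative style, the "scale up if CPU goes above 70%" clause could be written as a HorizontalPodAutoscaler manifest. This is an illustrative sketch; the my-app name refers to the Deployment shown later in this article:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:            # the Deployment whose replica count this manages
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU crosses 70%

You never tell Kubernetes to add or remove a pod; you declare the rule and the controller does the arithmetic.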
Kubernetes grew out of Google's internal Borg system and now runs at a large share of big infrastructure organisations. It is also what runs at countless five-engineer startups that did not need it and now have to maintain it.
The honest test
You probably do not need Kubernetes if:
- You have fewer than 10 services in production.
- Your team is fewer than 20 engineers.
- You can fit your production workload on 1–3 machines.
- You are not bottlenecked by deployment frequency.
- No one on your team has prior Kubernetes operational experience.
You probably do need (or significantly benefit from) Kubernetes if:
- You have 20+ services with different scaling profiles.
- Multiple teams ship to production independently.
- You need reliable rolling deployments without coordinating releases across teams.
- You operate across multiple regions or cloud providers.
- Your traffic patterns demand auto-scaling on multiple dimensions.
If you are in the first list, alternatives are usually better. ECS, App Runner, Cloud Run, Fly.io, and Render all give you container orchestration with a fraction of the operational burden. We genuinely recommend them over Kubernetes for most teams.
The core concepts
[Diagram] A Kubernetes cluster: the control plane (API server, etcd, scheduler, controller manager) and worker nodes (running kubelet and pods); the scheduler assigns pods to nodes, and each node's kubelet runs them.
The nouns you need:
- Pod: the smallest deployable unit. Usually one container, sometimes a few related ones (a sidecar). Pods are ephemeral — they get replaced.
- Deployment: a declaration that "there should be N pods of this template running". Handles rolling updates, rollbacks, scaling.
- Service: a stable network endpoint for a set of pods. Load-balances across pods. The pods underneath can change; the service IP/DNS is stable.
- Ingress: external HTTP/HTTPS routing into services. Maps URLs to services.
- ConfigMap and Secret: configuration data and credentials, mounted into pods as files or environment variables.
- Namespace: a logical grouping. Often one per team or environment.
You will use these six concepts every day. Kubernetes has dozens more built in (StatefulSets, DaemonSets, Jobs, CronJobs, NetworkPolicies, etc.), plus whatever custom resources your tooling adds; learn them as you need them.
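The two configuration nouns are easiest to see side by side. A minimal sketch (names and values are illustrative) of a ConfigMap and a Secret, and the container spec lines that consume them as environment variables:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets
stringData:                        # written as plain text, stored base64-encoded
  DATABASE_URL: "postgres://user:password@db:5432/app"

# excerpt from a Deployment's container spec (like the one in the next section)
containers:
  - name: my-app
    image: my-registry/my-app:1.0
    envFrom:
      - configMapRef: { name: my-app-config }
      - secretRef: { name: my-app-secrets }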
A minimal deployment
Three YAML files cover most simple services.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0
          ports:
            - containerPort: 8080
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 500m, memory: 256Mi }
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector: { app: my-app }
  ports:
    - port: 80
      targetPort: 8080
---
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port: { number: 80 }

Apply with kubectl apply -f .; Kubernetes converges to the declared state. Update the image tag, apply again, and the rolling deploy happens automatically.
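A few day-two commands that pair with these files (using the my-app Deployment above):

kubectl apply -f .                               # declare or update desired state
kubectl rollout status deployment/my-app         # watch the rolling update converge
kubectl rollout undo deployment/my-app           # roll back if the new version misbehaves
kubectl scale deployment/my-app --replicas=5     # one-off manual scale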
Helm
Writing the same YAML for ten services with slightly different values gets old fast. Helm is Kubernetes' package manager and templating system: define a chart (a parameterised set of YAML), supply values, render and apply. Most production Kubernetes clusters run Helm-managed services.
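A minimal sketch of what that templating looks like, with an illustrative chart named my-chart:

# my-chart/values.yaml
replicas: 3
image:
  repository: my-registry/my-app
  tag: "1.0"

# my-chart/templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

helm install my-app ./my-chart renders the templates with the values and applies the result; helm upgrade my-app ./my-chart --set image.tag=1.1 re-renders and rolls the change out.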
Helm is also how most third-party software (Postgres, Redis, monitoring stacks) is installed in clusters. helm install prometheus prometheus-community/prometheus deploys a fully configured Prometheus stack in one command.
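One caveat: that one-liner assumes the chart repository is already configured. If it is not, add it first:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update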
The hidden complexity
The forty-odd lines of YAML above are the simple part. The hidden complexity:
- Cluster operations: upgrades, etcd backups, cert rotation, node updates. A managed service (EKS, GKE, AKS) handles most of this; self-hosted is a non-trivial commitment.
- Networking: the CNI plugin (Calico, Cilium, Flannel), service mesh (Istio, Linkerd), ingress controller (NGINX, Traefik). Each is a system to learn.
- Storage: persistent volumes, storage classes, CSI drivers. Stateful workloads in Kubernetes are notoriously fiddly.
- Observability: setting up Prometheus, Grafana, log aggregation, and distributed tracing, all of it running on the cluster in order to monitor the cluster.
- Security: RBAC, NetworkPolicies, Pod Security Standards, image scanning, secrets management.
- Cost: a small managed cluster on EKS / GKE costs $75–100/month just for the control plane, before any nodes.
This is what people mean when they say Kubernetes is "an entire platform team's worth of work". For a small team, the answer is to either pay someone else to run it (managed service) or use a simpler abstraction.
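To give a flavour of what a single bullet from that list expands into, here is a minimal NetworkPolicy that denies all inbound pod traffic in a namespace, a common first step under the security item (the name is illustrative, and it only takes effect if your CNI plugin enforces NetworkPolicies):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress              # no ingress rules listed, so all inbound traffic is denied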
The alternatives
- AWS App Runner / Google Cloud Run / Azure Container Apps: managed container services. Push an image; URL appears. Scale to zero, pay per request. Excellent for "our app is a stateless web service".
- AWS ECS: a more flexible container orchestrator with significantly less operational overhead than Kubernetes. Fargate variant removes node management entirely.
- Fly.io: deploy containers globally with a single config file. Excellent developer experience; opinionated.
- Render, Railway: simpler PaaS-like alternatives. Good for solo developers and small teams.
- Nomad: HashiCorp's orchestrator. Simpler than Kubernetes, smaller ecosystem.
- Plain VPS with systemd: still a perfectly valid choice for many production apps. SSH, install dependencies, write a systemd unit, deploy via git pull.
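As a sketch of that last option, the systemd unit for a hypothetical Node.js service might look like this (paths, user, and command are illustrative):

# /etc/systemd/system/my-app.service
[Unit]
Description=my-app web service
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/my-app
ExecStart=/usr/bin/node server.js
Restart=on-failure
Environment=PORT=8080

[Install]
WantedBy=multi-user.target

systemctl enable --now my-app starts it and keeps it running across reboots; git pull followed by systemctl restart my-app is the entire deploy pipeline.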
Frequently Asked Questions
Should I learn Kubernetes for my career?
Yes. Even if your current company does not need it, having Kubernetes literacy is a strong career asset. Spend a weekend setting up a kind / minikube cluster locally and deploying a small app; that is enough to be a useful contributor on a Kubernetes team.
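That weekend can be as short as this, assuming kind and kubectl are installed and you reuse the manifests from earlier in this article:

kind create cluster --name playground
kubectl apply -f deployment.yaml -f service.yaml
kubectl get pods
kubectl port-forward deployment/my-app 8080:8080   # then open http://localhost:8080
kind delete cluster --name playground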
What is the difference between Kubernetes and Docker?
Docker is a way to package and run a single container. Kubernetes is a way to run many containers across many machines with policies (replicas, scaling, restart, networking). Kubernetes runs the same container images you build with Docker, through a container runtime such as containerd.
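The difference is visible in the commands themselves (image and file names as in the example above):

docker run -p 8080:8080 my-registry/my-app:1.0   # one container on one machine
kubectl apply -f deployment.yaml                 # N replicas, restarts, and rolling updates across a cluster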
Is GKE / EKS / AKS worth the cost?
For most teams, yes. The managed control plane removes the most operationally painful parts of Kubernetes (etcd, API server upgrades, certificate rotation). The $75–100/month is cheap insurance.
Share your thoughts
Worked with this in production and have a story to share, or disagree with a tradeoff? Email us at support@mybytenest.com — we read everything.