People who avoid Docker usually have one of two reasons. The first: they tried it once, the tutorial was confusing, the disk filled up with mysterious images, and the whole thing felt like a solution looking for a problem. The second: they think it is overkill for what they actually do (run a Python script, deploy a Node app to a VPS).
Both reasons are honest, and both fade once the mental model clicks. This article is the introduction we wish someone had given us — what Docker actually is underneath the marketing, when it genuinely helps, and the five commands that cover 90% of real use.
What Docker actually is
Strip away the branding. Docker is two things that work together:
- An image format: a layered tarball that contains a filesystem (your app, its dependencies, a thin Linux base) and metadata (what command to run, environment variables, exposed ports).
- A way to run those images as isolated processes: using Linux namespaces (PID, network, mount, user) and control groups (cgroups) to give each container its own view of the system without the overhead of a VM.
```mermaid
flowchart TB
    subgraph Image["Image (read-only layers)"]
        A["Layer 1: Base OS<br/>e.g. Alpine Linux"]
        B["Layer 2: Runtime<br/>e.g. Python 3.12"]
        C["Layer 3: App dependencies<br/>e.g. requirements.txt installed"]
        D["Layer 4: Your code"]
    end
    Image --> Container["Running container<br/>= image + writable layer + isolated process"]
    Container --> Process["Your process<br/>thinks it is on its own machine"]
```
An image is a stack of read-only filesystem layers. A container is the image plus a writable layer plus a running process with isolated namespaces.
The container is not a VM. It shares the host kernel; it is a regular Linux process with restricted views. This is why Docker is fast: starting a container is starting a process, not booting a machine.
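You can see both claims directly from a terminal (a quick check, assuming Docker is installed and can pull the alpine image):

```shell
# Start a container that just sleeps.
docker run -d --rm --name demo alpine sleep 300

# The container shows up in the host's process list: it is an ordinary
# Linux process with restricted views, not a VM.
ps -ef | grep '[s]leep 300'

# And it shares the host kernel; there is no guest kernel to boot.
# This prints the same kernel release as running `uname -r` on the host.
docker run --rm alpine uname -r

docker stop demo
```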
Why bother
Three problems Docker solves that nothing else solves cleanly:
Reproducible environments
"It works on my machine" is the phrase Docker exists to retire. The image is the entire environment: same OS, same library versions, same Python interpreter. If it runs on your laptop, it runs identically on the production server — same bytes.
Dependency isolation
Two services need different Python versions. Without containers, you fight with virtualenv, system packages, conflicting library versions. With containers, each service has its own complete environment in its image. They do not see each other.
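The isolation is easy to demonstrate (a sketch, assuming Docker can pull the official Python images):

```shell
# Two Python versions, zero conflict: each container carries its own
# interpreter and its own site-packages.
docker run --rm python:3.11-slim python --version
docker run --rm python:3.12-slim python --version
```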
Deployment unit
The thing you build (an image) is the thing you ship. No "build for production", no "copy these files but not those". Push image; pull image; run image. Same artifact across environments.
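The build-once, ship-everywhere flow looks like this (a sketch; the registry hostname is a placeholder):

```shell
# Build and tag the artifact once.
docker build -t registry.example.com/my-app:1.0 .

# Push the exact same bytes to a registry...
docker push registry.example.com/my-app:1.0

# ...then pull and run the exact same bytes on any server.
docker pull registry.example.com/my-app:1.0
docker run -d -p 8080:8080 registry.example.com/my-app:1.0
```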
The five commands you will use
```shell
# 1. Build an image from a Dockerfile in the current directory
docker build -t my-app:1.0 .

# 2. Run a container from an image
docker run -p 8080:8080 my-app:1.0

# 3. List running containers
docker ps

# 4. View logs from a running container
docker logs -f <container_id>

# 5. Open a shell inside a running container (debug)
docker exec -it <container_id> sh
```

That is it. Other commands exist, but you reach for them rarely: docker stop, docker rm, docker pull, and docker push round out the day-to-day. Everything else is either advanced (compose, networks, volumes for state) or specific to one workflow.
The Dockerfile
A Dockerfile is a recipe for building an image. Each instruction creates a layer.
```dockerfile
# Start from a small Python base image
FROM python:3.12-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the dependency manifest first (separate layer for caching)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Document which port the app listens on
EXPOSE 8080

# Tell Docker what command to run when the container starts
CMD ["python", "app.py"]
```

The order matters for caching. Docker caches each layer; when a layer changes, all subsequent layers rebuild. Copying requirements.txt separately and installing dependencies in their own layer means changes to your Python code do not retrigger pip install, a massive build-time saving.
docker-compose for multi-container apps
Most real applications are not one container; they are a web server, a database, a Redis instance, a worker. Running each with its own docker run command is tedious. docker-compose defines them all in one file.
```yaml
# docker-compose.yml
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://postgres@db/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      - db
      - redis
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - db-data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine

volumes:
  db-data:
```

One command starts the whole stack: docker-compose up. One command stops it: docker-compose down. Local development becomes painless once you commit to compose.
Common mistakes
- Putting secrets in the Dockerfile. Anything you COPY or ENV into the image is in the image forever, retrievable by anyone who pulls it. Use environment variables at runtime, or a secret manager.
- Running as root. Default Docker images run as root inside the container. If the process is compromised, the attacker has root inside the container. Add USER nonroot to your Dockerfile.
- Using the :latest tag in production. Pinning to a specific version (postgres:16.2, not postgres:latest) is the difference between deterministic deploys and surprise breakage.
- Bloated images. Starting from python:3.12 instead of python:3.12-slim means roughly a 1 GB image instead of 150 MB. Use multi-stage builds for compiled languages.
- Persisting data inside the container. Containers are ephemeral: anything written to the container filesystem is lost when the container is removed. Use volumes for data that needs to persist.
When NOT to use Docker
- Single Python or Node script you run locally. Just run it. Docker adds complexity for no benefit.
- Desktop GUI applications. Possible but painful. Use the native installer.
- Embedded firmware or anything close to hardware. Docker is a server tool.
- When the team has zero Docker experience and the project is short. The learning curve costs more than the deployment simplicity saves for genuinely small projects.
Going further
- Multi-stage builds for smaller images: build in one stage, copy artifacts to a smaller runtime stage.
- Health checks in the Dockerfile so orchestrators know when the container is ready.
- Build cache mounting with BuildKit for dramatically faster rebuilds.
- Distroless images (Google's tiny Linux-less base images) for production hardening.
- Container scanning with Trivy or Grype to find CVEs in your dependencies.
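The first and fourth items combine naturally in one Dockerfile (a Go-based sketch; the module path is illustrative): build in a stage with the full toolchain, then copy only the binary into a distroless runtime image.

```dockerfile
# Stage 1: build with the full Go toolchain.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server ./cmd/server

# Stage 2: ship only the compiled binary in a tiny distroless image,
# leaving the compiler and sources behind.
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/server /server
ENTRYPOINT ["/server"]
```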
Frequently Asked Questions
Is Docker different from Podman?
Functionally similar; Podman is daemonless and rootless by default, which some teams prefer. The CLIs are largely interchangeable. For most users, Docker Desktop on Mac/Windows or Docker Engine on Linux is the default; Podman is a credible alternative when its specific differences matter.
Do I need Docker if I use Kubernetes?
Kubernetes runs container images, so yes — the images you push to your registry are typically built with Docker. Kubernetes itself uses containerd or CRI-O as the runtime; the user-facing build experience is still Docker for most teams.
How much overhead does Docker add?
Negligible CPU overhead. Memory: a few MB of bookkeeping per container. Disk: image layers can add up; clean up unused images periodically with docker system prune. Network: a small latency penalty (microseconds) on cross-container traffic.
Can I use Docker for the database in production?
Technically yes, but most teams use a managed database (RDS, Cloud SQL, Neon) for production. Running your database in a container is fine for development; in production, the operational overhead of running stateful services in containers usually outweighs the benefits.
What about WSL2 and Docker on Windows?
Docker Desktop on Windows uses WSL2 under the hood: you are running Linux containers on a Linux kernel inside Windows. Performance is good; integration with Windows files is good. The only common pain point is filesystem performance for very large directories under /mnt/c/.
Share your thoughts
Worked with this in production and have a story to share, or disagree with a tradeoff? Email us at support@mybytenest.com — we read everything.