Docker Basics: Complete Guide to Containerization

📅 December 05, 2025 ⏱️ 5 min read 🏷️ DevOps

Docker completely changed how I approach building and deploying applications. Before containers, I regularly dealt with environment-related bugs that appeared only after deployment. Docker did not remove every problem, but it eliminated the uncertainty around where an application runs. What follows is a practical explanation of Docker, based on real usage, common mistakes I faced, and how those issues were resolved in production environments.

What Docker Is and Why It Matters

Docker is an open source containerization platform that packages applications together with their dependencies. Containers share the host operating system kernel, which makes them far more lightweight than traditional virtual machines. In practice, this means faster startup times and lower resource usage.

The biggest improvement I noticed after adopting Docker was consistency. The same container image ran on my local machine, staging servers, and production servers without changes. This behavior aligns with Docker’s design goals as described in the official Docker documentation at https://docs.docker.com.

Practical Benefits Observed in Real Projects

  • Reliable behavior across development and production environments
  • Isolation between services running on the same host
  • Easy portability between servers and cloud providers
  • Lower memory usage compared to virtual machines
  • Faster deployments and rollbacks
  • Strong compatibility with microservice architectures
  • Smooth integration with CI/CD pipelines

Installing Docker on Development and Server Systems

I have installed Docker on Linux servers, macOS laptops, and Windows machines. Linux installations are common in production and are well documented by Docker itself.


sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl enable docker
sudo usermod -aG docker $USER

docker --version
docker run hello-world

The most common error I faced here was permission denied when running Docker commands. This happens when the user is not added to the docker group. Logging out and back in after adding the user resolved it.
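
A quick way to confirm whether the group change is active in the current session (the docker group name is Docker's default; the messages are my own):

```shell
# Print the current user's group memberships; "docker" should be listed
# after usermod -aG docker and a fresh login
if id -nG | grep -qw docker; then
  echo "docker group active"
else
  echo "log out and back in to pick up the docker group"
fi
```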

Docker Images in Practice

Docker images are immutable templates used to create containers. Images are built in layers, and understanding how layers work helped me reduce build times significantly. Docker’s image layering behavior is documented by Docker and OCI standards.


docker pull nginx:latest
docker pull python:3.11-slim
docker images
docker build -t myapp:1.0 .
docker tag myapp:1.0 username/myapp:1.0
docker push username/myapp:1.0

One mistake I repeatedly made early on was rebuilding images without tags. This caused confusion during deployments. Versioned tags fixed that issue.
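
The habit that fixed it, sketched with illustrative image names: give every build an immutable version tag, optionally alongside a moving latest tag, and deploy only the versioned one:

```shell
# Build one image with two tags: a fixed release tag and a convenience pointer
docker build -t myapp:1.2.0 -t myapp:latest .

# Deployments reference the immutable tag, so rollbacks are unambiguous
docker run -d --name myapp myapp:1.2.0
```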

Running and Managing Containers Safely

Containers are running instances of images. Managing them carefully is critical in production. I once stopped the wrong container during maintenance, which reinforced the importance of naming containers clearly.


docker run -d -p 8080:80 --name webserver nginx
docker ps
docker logs webserver
docker exec -it webserver /bin/bash
docker stop webserver
docker rm webserver

Another frequent issue was forgetting to expose or map ports. Containers appeared healthy but were unreachable from outside the host.
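
Two quick checks I now run whenever a container seems healthy but unreachable (they assume the webserver container started above):

```shell
# Show which host ports are actually published for the container
docker port webserver

# Hit the mapped host port directly; -I fetches only the response headers
curl -I http://localhost:8080
```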

Writing Dockerfiles That Scale Well

A Dockerfile defines how an image is built. My early Dockerfiles were functional but inefficient. Large images and slow builds forced me to rethink instruction order and base image choices.


FROM python:3.11-slim
WORKDIR /app
ENV PYTHONUNBUFFERED=1
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]

Reordering dependency installation steps improved caching and reduced rebuild times. This behavior is consistent with Docker’s documented build cache mechanism.
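
For contrast, my original ordering looked roughly like this (reconstructed from memory; the anti-pattern is what matters, not the exact file):

```dockerfile
# Anti-pattern: COPY . . comes before dependency installation, so any
# source edit invalidates the cache for the expensive pip install layer
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```

Copying only requirements.txt before the install step, as in the Dockerfile above, lets Docker reuse the install layer whenever only application code changes.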

Using Multi-Stage Builds to Reduce Image Size

Multi-stage builds allowed me to separate build tools from runtime dependencies. This technique is recommended in Docker’s official best practices.


FROM node:18-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Docker Compose for Multi-Service Applications

Once applications grew beyond a single container, Docker Compose became essential. Managing databases, caches, and background workers manually was error prone.


version: "3.8"
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass

Startup failures often occurred when dependent services were not yet ready. Adding depends_on definitions ordered container startup and reduced those errors during local development, though depends_on alone only controls start order.
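
A service that has started may still not be ready to accept connections, so my applications include a small retry loop at startup. A minimal sketch in Python (hostnames, ports, and timeouts are illustrative):

```python
# Minimal readiness check: retry a TCP connection to a dependency (such as
# the db service from the Compose file) before the app starts serving.
import socket
import time

def wait_for(host: str, port: int, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Return True once a TCP connection to (host, port) succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

In a Compose network the hostname is simply the service name, e.g. `wait_for("db", 5432)` before opening a database connection.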

Persisting Data with Docker Volumes

Containers are ephemeral by default. I lost test data early on because I did not use volumes correctly. Docker volumes are the recommended way to persist data, as documented by Docker.


docker volume create appdata
docker run -v appdata:/var/lib/data myapp
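
Volumes also need backups. A pattern along the lines of Docker's documented approach is mounting the volume into a throwaway container and archiving it (paths and names here are illustrative):

```shell
# Mount the named volume read-only and write a tarball to the current directory
docker run --rm \
  -v appdata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/appdata-backup.tar.gz -C /data .
```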

Docker Networking in Real Environments

Docker networking allows containers to communicate using service names instead of IP addresses. This simplified service discovery and removed the need for hard coded values.


docker network create appnet
docker run -d --network appnet --name api myapi
docker run -it --network appnet alpine ping api

Security Practices I Apply in Production

Running containers securely is essential. I initially ran containers as root, which increased risk unnecessarily. Docker’s security recommendations strongly advise using non-root users.


FROM python:3.11-slim
RUN useradd -r appuser
USER appuser
WORKDIR /app
COPY . .
CMD ["python", "app.py"]

  • Use minimal base images
  • Run containers as non-root users
  • Scan images for known vulnerabilities
  • Limit CPU and memory usage
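
The last point can be applied directly at run time; the flags below are standard docker run options, while the image name is a placeholder:

```shell
# Cap memory at 256 MB and CPU at half a core for a single container
docker run -d --name api \
  --memory=256m \
  --cpus=0.5 \
  myapp:1.0
```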

Docker in CI/CD Pipelines

Docker fits naturally into CI/CD workflows. Building and testing inside containers removed environment drift from automated pipelines.


name: Build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: docker build -t myapp .
      - run: docker run myapp pytest
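
One refinement I would suggest, using the same GitHub Actions syntax (myapp is a placeholder): tag the image with the commit SHA so every pipeline run produces a uniquely traceable image:

```yaml
      # Each build gets an immutable tag derived from the commit
      - run: docker build -t myapp:${{ github.sha }} .
      - run: docker run myapp:${{ github.sha }} pytest
```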

Troubleshooting Issues I Encountered Most Often

When something misbehaves, these commands cover most of my debugging: container logs, detailed configuration inspection, live resource usage, and cleanup of unused data.


docker logs container_name
docker inspect container_name
docker stats
docker system prune

Another recurring issue involved Docker configuration files written in JSON, such as the daemon configuration at /etc/docker/daemon.json. When the Docker daemon failed to start, invalid JSON formatting was often the cause. Validating these files using https://jsonformatterspro.com helped identify syntax errors quickly and safely.
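
When a browser tool is not at hand, the same syntax check works locally with Python's standard library. A self-contained sketch (the sample config and temp path are illustrative; on a real host you would point json.tool at /etc/docker/daemon.json):

```shell
# Write a sample daemon config to a temp file so the example runs anywhere,
# then validate its JSON syntax with Python's stdlib json.tool module
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m" }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "valid JSON"
```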

Docker is not difficult once the core concepts are understood. Most production issues I encountered came from small configuration mistakes rather than complex bugs. With consistent practices, trusted documentation, and proper validation tools, Docker becomes a reliable foundation for modern application development.

🏷️ Tags:
docker devops containers deployment dockerfile