
Docker and Kubernetes for Beginners: From Zero to Deployment

Hands-on Docker and Kubernetes guide covering containers, Dockerfiles, Compose, pods, services, and deployments for beginners.

Priya Patel
17 min read

Think of Docker as a Shipping Container for Your Code

Before shipping containers existed, moving goods internationally was chaos. Every item had a different shape, different fragility requirements, different handling needs. Dockworkers loaded and unloaded each item individually. Slow. Expensive. Error-prone. Then someone figured out: what if we put everything into standardized metal boxes? Doesn't matter if it's electronics from Shenzhen or coffee beans from Colombia — same box, same crane, same ship, same truck. Suddenly global trade got ten times easier.

Docker does the same thing for software. Your app, its runtime, its libraries, its configuration files — all packed into one standardized container. Doesn't matter if it's running on your laptop, your colleague's machine, a staging server in Mumbai, or a production cluster in AWS us-east-1. Same container. Same result. Everywhere.

And here's the analogy's second half: if Docker is the shipping container, then Kubernetes is the entire port logistics system — tracking which containers go where, making sure enough ships are available, rerouting cargo when a ship breaks down, and scaling up during busy season. Together, they form the backbone of modern application deployment.

I remember feeling completely overwhelmed when I first encountered Docker. Terminology felt alien. Documentation assumed prior knowledge I didn't have. Most tutorials jumped to complex setups without explaining the basics. So here's my attempt to write the tutorial I wished existed back then. We'll start simple and build up gradually. If you're evaluating where to run your containers, our comparison of AWS vs GCP vs Azure for cloud computing will help you choose the right platform.

Part 1: Docker Fundamentals

What Is a Container, Really?

A container is a lightweight, isolated environment that shares the host operating system's kernel but has its own filesystem, processes, network, and resource limits. Think of it as a very efficient virtual machine — but without the overhead of running a separate OS.

That distinction matters.

Containers vs Virtual Machines:

Feature        | Container             | Virtual Machine
Boot time      | Seconds               | Minutes
Size           | Megabytes             | Gigabytes
OS             | Shares host kernel    | Full guest OS
Performance    | Near-native           | 5-20% overhead
Isolation      | Process-level         | Hardware-level
Resource usage | Minimal               | Significant
Use case       | Application packaging | Full OS isolation

A single server that can run 2-3 VMs can easily run 20-50 containers. That efficiency gap is why containers have taken over the deployment world. It's not even close.

Installing Docker

Docker Desktop is available for Windows, macOS, and Linux. On Ubuntu (which most Indian developers tend to use for servers), you can install Docker Engine directly. If you're not yet comfortable with Linux, our Linux for developers guide will get you up to speed with the command-line skills you need:

# Install Docker on Ubuntu
sudo apt-get update
sudo apt-get install -y docker.io

# Start Docker and enable it on boot
sudo systemctl start docker
sudo systemctl enable docker

# Add your user to the docker group (so you don't need sudo)
sudo usermod -aG docker $USER

# Log out and back in, then verify
docker --version
docker run hello-world

If hello-world runs successfully and prints a greeting message, Docker is working. Congratulations — you've just run your first container. Wasn't so scary, was it?

Images and Containers: Don't Mix These Up

Most beginners trip over this distinction. An image is a blueprint — a read-only template containing instructions for creating a container. A container is a running instance of an image. You can create multiple containers from the same image, just like you can create multiple objects from the same class in OOP. Same recipe, different dishes.

# Pull an image from Docker Hub
docker pull nginx:latest

# Create and run a container from the image
docker run -d -p 8080:80 --name my-web-server nginx:latest

# List running containers
docker ps

# Stop the container
docker stop my-web-server

# Remove the container
docker rm my-web-server

Here, nginx:latest is the image. my-web-server is the container. -d runs it in the background (detached). -p 8080:80 maps port 8080 on your machine to port 80 inside the container. Pretty straightforward once you see it in action.

Writing a Dockerfile

A Dockerfile is a text file that defines how to build an image. Each instruction creates a layer, and Docker caches these layers for fast rebuilds. Let me walk through a practical example — containerising a Node.js Express application:

# Use the official Node.js 20 image as the base
FROM node:20-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy package files first (for better caching)
COPY package.json package-lock.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy the rest of the application code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define the command to run the app
CMD ["node", "server.js"]

Why copy package.json before the rest of the code? Docker caches each layer. If your application code changes but package.json doesn't, Docker reuses the cached dependency layer and only rebuilds from the COPY . . step. Installing node_modules might take 30 seconds, but copying application code takes milliseconds. Huge difference during development when you're rebuilding constantly.

Build and run it:

# Build the image (the dot means "use current directory as build context")
docker build -t my-node-app .

# Run a container from the image
docker run -d -p 3000:3000 --name my-app my-node-app

# View logs
docker logs my-app

# Execute a command inside the running container
docker exec -it my-app sh

Docker Compose: When One Container Isn't Enough

Real applications rarely run as a single container. You typically need an app server, a database, a cache, and maybe a reverse proxy. Docker Compose lets you define and run multi-container applications with a single YAML file. One command starts everything.

Here's a docker-compose.yml for a Node.js app with PostgreSQL and Redis:

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://postgres:secret@db:5432/myapp
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

# Start all services
docker compose up -d

# View logs for all services
docker compose logs -f

# Stop everything
docker compose down

# Stop and remove volumes (deletes data!)
docker compose down -v

Notice how services reference each other by name (db, cache). Docker Compose creates a network where containers can communicate using service names as hostnames. Your app connects to PostgreSQL at db:5432 and Redis at cache:6379 — no IP addresses needed. Clean. Simple.

Docker Best Practices You'll Thank Yourself For Later

Before we move on to Kubernetes, here are some practices that'll save you pain down the road:

  1. Use specific image tags, not latest. Instead of FROM node:latest, use FROM node:20.11-alpine. latest can change unexpectedly and break your build. I've seen this happen. Not fun.

  2. Use multi-stage builds to reduce image size:

# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]

Produces a much smaller final image because build tools, source code, and dev dependencies are discarded. Your production image only contains what it needs to run.

  3. Don't run as root. Add a non-root user in your Dockerfile:

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

  4. Use .dockerignore to exclude unnecessary files (node_modules, .git, tests) from the build context.

  5. Keep images small. Use Alpine-based images when possible (node:20-alpine is ~50MB vs node:20 at ~350MB). Seven times smaller. Why wouldn't you?
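For the .dockerignore tip above, a reasonable starting point for the Node.js project we containerised earlier might look like this (adjust for your own project layout):

```
# .dockerignore -- keep these out of the build context
node_modules
npm-debug.log
.git
.gitignore
Dockerfile
docker-compose.yml
.env
tests/
*.md
```

Excluding node_modules matters most: it keeps the build context small and ensures dependencies are installed fresh inside the image rather than copied from your host.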

Part 2: Kubernetes — Orchestration at Scale

Docker's excellent for running a few containers on a single machine. But what happens when your application needs to run on multiple servers? When you need automatic scaling based on traffic? When a container crashes at 3 AM and needs to be restarted without anyone waking up? When you need to deploy a new version without downtime?

That's where Kubernetes enters the picture.

What Is Kubernetes?

Kubernetes (often abbreviated as K8s — "K" + 8 middle letters + "s") is a container orchestration platform. It manages the deployment, scaling, and operation of containerised applications across a cluster of machines.

Originally designed by Google based on their internal system called Borg. Now maintained by the Cloud Native Computing Foundation (CNCF). It's the industry standard for container orchestration, and it probably will be for years to come.

Core Kubernetes Concepts

I'll walk through these one at a time. Don't worry if they don't all click immediately — they'll make more sense once you see them in action.

Cluster: A set of machines (nodes) that run containerised applications managed by Kubernetes.

Node: A single machine in the cluster. Nodes can be physical servers or virtual machines. Two types:

  • Control plane node: Runs the Kubernetes management components (API server, scheduler, controller manager)
  • Worker node: Runs your application containers

Pod: Smallest deployable unit in Kubernetes. A pod wraps one or more containers that share storage and network. In most cases, a pod runs a single container. One pod, one job.
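To make that concrete, here's a minimal pod manifest (a sketch for illustration; in practice you'll almost always let a Deployment create pods for you):

```yaml
# pod.yaml -- a single-container pod, the smallest deployable unit
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25-alpine
      ports:
        - containerPort: 80
```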

Service: An abstraction that defines how to access a set of pods. Services provide stable IP addresses and DNS names, load balancing across pods, and service discovery.

Deployment: A declarative configuration describing the desired state for your pods — how many replicas, which image to use, resource limits, update strategy. Kubernetes continuously works to make actual state match desired state. You say "I want 3 copies running." Kubernetes makes it happen.

Namespace: A way to divide cluster resources between multiple users or teams. Think of it as virtual clusters within a physical cluster.

Setting Up Minikube for Local Development

Minikube runs a single-node Kubernetes cluster on your local machine. Perfect for learning and development. You won't need a cloud account or any infrastructure — just your laptop.

# Install minikube (Linux)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start a cluster
minikube start

# Verify the cluster is running
kubectl cluster-info
kubectl get nodes

# Enable useful addons
minikube addons enable dashboard
minikube addons enable metrics-server

kubectl is the command-line tool for interacting with Kubernetes clusters. You'll use it constantly. Might want to set up tab completion for it — saves a lot of typing.

Deploying Your App to Kubernetes

Let's deploy the Node.js application we containerised earlier. First, push the Docker image to a registry (Docker Hub, GitHub Container Registry, or a private registry):

# Tag the image for Docker Hub
docker tag my-node-app yourusername/my-node-app:1.0.0

# Push to Docker Hub
docker push yourusername/my-node-app:1.0.0

Now create a Kubernetes Deployment:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
  labels:
    app: my-node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
        - name: my-node-app
          image: yourusername/my-node-app:1.0.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 20

And a Service to expose it:

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-node-app-service
spec:
  selector:
    app: my-node-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer

Apply both configurations:

# Apply the deployment
kubectl apply -f deployment.yaml

# Apply the service
kubectl apply -f service.yaml

# Check the status
kubectl get deployments
kubectl get pods
kubectl get services

# Watch pods come up in real-time
kubectl get pods -w

What just happened? You created three replicas of your application, each running in its own pod. A Service load-balances incoming traffic across all three pods. If a pod crashes, Kubernetes automatically creates a new one to maintain the desired count of three. No human intervention needed. That's the magic.

Scaling Your Application

Scaling is almost embarrassingly simple:

# Scale to 5 replicas
kubectl scale deployment my-node-app --replicas=5

# Or use autoscaling based on CPU usage
kubectl autoscale deployment my-node-app --min=3 --max=10 --cpu-percent=70

With autoscaling, Kubernetes monitors CPU usage across your pods and automatically adds or removes replicas to maintain approximately 70% CPU utilisation. Traffic spike during a sale? Scales up. Sunday morning at 4 AM? Scales down. No manual intervention required.
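The same policy can be written declaratively. A sketch using the autoscaling/v2 API (it relies on metrics-server, which we enabled as a Minikube addon earlier):

```yaml
# hpa.yaml -- declarative equivalent of the kubectl autoscale command above
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-node-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-node-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Keeping the policy in a file means it's version-controlled and applied with kubectl apply, like everything else.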

Rolling Updates (Zero-Downtime Deployments)

When you deploy a new version, Kubernetes performs a rolling update by default — it gradually replaces old pods with new ones, making sure some pods are always available to serve traffic. No downtime. Users don't notice a thing.

# Update the image to a new version
kubectl set image deployment/my-node-app my-node-app=yourusername/my-node-app:2.0.0

# Watch the rollout progress
kubectl rollout status deployment/my-node-app

# If something goes wrong, roll back
kubectl rollout undo deployment/my-node-app

By default, Kubernetes replaces 25% of pods at a time, waiting for new pods to pass their readiness probes before proceeding. Messed something up? rollout undo takes you back to the previous version in seconds. I think this alone justifies learning Kubernetes for anyone running production services.
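Both percentages are tunable through the update strategy. A sketch of the stanza you could add under spec in deployment.yaml:

```yaml
# Controls how aggressively pods are replaced during a rolling update
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%        # extra pods allowed above the desired replica count
    maxUnavailable: 25%  # pods that may be unavailable during the update
```

Setting maxUnavailable to 0 forces Kubernetes to bring each new pod up before taking an old one down, at the cost of a slower rollout.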

Monitoring with kubectl

Here are the kubectl commands you'll use most frequently:

# List all resources in the default namespace
kubectl get all

# Describe a specific pod (detailed info including events)
kubectl describe pod my-node-app-abc123

# View logs from a pod
kubectl logs my-node-app-abc123

# Stream logs in real-time
kubectl logs -f my-node-app-abc123

# Execute a command inside a pod
kubectl exec -it my-node-app-abc123 -- sh

# View resource usage
kubectl top pods
kubectl top nodes

# Delete a resource
kubectl delete pod my-node-app-abc123
kubectl delete -f deployment.yaml

Part 3: When to Use Docker Alone vs Kubernetes

Not every application needs Kubernetes. In fact, Kubernetes adds significant operational complexity that isn't justified for smaller deployments. I've seen teams adopt it way too early and spend more time wrangling infrastructure than building features.

Use Docker (Without Kubernetes) When:

  • You're running a small number of containers (fewer than 10-15)
  • Your application runs on a single server
  • You don't need auto-scaling
  • You're in early development or running personal projects
  • Your team doesn't have Kubernetes expertise
  • Docker Compose handles your orchestration needs

Use Kubernetes When:

  • You need to run your application across multiple servers for reliability
  • You need automatic scaling based on traffic or resource usage
  • You require zero-downtime deployments
  • You have multiple services that need service discovery and load balancing
  • Your organisation has dedicated DevOps or platform engineering expertise
  • You're running a SaaS product or high-traffic application

A Middle Ground: Managed Container Services

If Kubernetes feels like overkill but Docker Compose feels too simple, several managed services offer something in between:

Service              | Provider  | Complexity | Cost
AWS ECS (Fargate)    | Amazon    | Medium     | Pay per container
Google Cloud Run     | Google    | Low        | Pay per request
Azure Container Apps | Microsoft | Low-Medium | Pay per container
Railway              | Railway   | Very Low   | Pay per resource
Fly.io               | Fly.io    | Low        | Pay per resource

Google Cloud Run deserves special mention. You give it a Docker image, and it handles everything — scaling to zero when idle, scaling up on traffic, HTTPS, custom domains. For many applications, especially those with variable traffic, Cloud Run is probably the ideal deployment target. Seems like it hits the sweet spot for most small-to-medium projects.

Part 3.5: Managed Kubernetes Options

If you do need Kubernetes but don't want to manage the control plane yourself, every major cloud provider offers managed Kubernetes:

Service | Provider     | Starting Cost              | Best For
EKS     | AWS          | ~$73/month (control plane) | AWS-heavy shops
GKE     | Google Cloud | Free tier available        | Best managed K8s experience
AKS     | Azure        | Free (control plane)       | Microsoft/Azure shops

GKE (Google Kubernetes Engine) is generally considered the best managed Kubernetes experience — which makes sense, since Google created Kubernetes in the first place. It offers an Autopilot mode that manages node pools automatically, reducing operational overhead even further.

For Indian startups, GKE Autopilot or AWS ECS Fargate are usually the most practical choices. They let you focus on your application instead of managing infrastructure. Arguably the best money you'll spend on your stack.

Common Pitfalls and How to Avoid Them

After years of working with Docker and Kubernetes, here are the mistakes I see most often. Some of these I've made myself, so don't feel bad if any sound familiar.

  1. Using latest tags in production. Always pin your image versions. my-app:latest today might be a completely different image tomorrow. I've watched a production deploy fail because someone pushed a broken image to latest while another team was deploying.

  2. Not setting resource limits. A single pod without memory limits can consume all available memory on a node and crash other pods. Always set requests and limits. Always.

  3. Storing secrets in plain text. Never put passwords, API keys, or certificates in your Dockerfile or deployment YAML. Use Kubernetes Secrets or external secret management tools like HashiCorp Vault. Seems obvious, but you'd be surprised how often this happens.

  4. Not using health checks. Without readiness and liveness probes, Kubernetes can't know if your application is actually healthy. A pod might be running but unable to serve traffic — health checks catch this.

  5. Over-engineering early. Starting with Kubernetes for a two-container app is like renting a warehouse for your bicycle. Start with Docker Compose. Graduate to Kubernetes when the complexity justifies it.

  6. Ignoring image size. Large images mean slow deployments, higher storage costs, and larger attack surfaces. Use multi-stage builds and Alpine-based images.
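On the secrets point above: a sketch of a Kubernetes Secret and how a container would reference it (app-secrets is a hypothetical name):

```yaml
# secret.yaml -- values are only base64-encoded, not encrypted,
# so restrict who can read Secrets via RBAC
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:            # stringData accepts plain text; Kubernetes encodes it
  DATABASE_PASSWORD: secret

# In the container spec, reference the Secret instead of hard-coding the value:
#   env:
#     - name: DATABASE_PASSWORD
#       valueFrom:
#         secretKeyRef:
#           name: app-secrets
#           key: DATABASE_PASSWORD
```

Apply it with kubectl apply -f secret.yaml, and keep the file itself out of version control.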

A Practical Learning Path

If this guide has sparked your interest, here's how I'd suggest learning Docker and Kubernetes systematically. Don't rush. Each phase builds on the last.

Week 1-2: Docker basics

  • Install Docker, run some images, learn the CLI
  • Write Dockerfiles for your own projects
  • Use Docker Compose for multi-container setups

Week 3-4: Docker in practice

  • Containerise a real project (not just a hello-world app)
  • Set up a CI/CD pipeline that builds Docker images (GitHub Actions is great for this)
  • Push images to Docker Hub or GitHub Container Registry

Week 5-6: Kubernetes concepts

  • Install Minikube, learn kubectl
  • Deploy a simple app with Deployments and Services
  • Experiment with scaling, rolling updates, and rollbacks

Week 7-8: Kubernetes in practice

  • Deploy a multi-service application
  • Set up Ingress for HTTP routing
  • Learn ConfigMaps and Secrets for configuration management
  • Explore Helm charts for packaging Kubernetes applications

Keep It Simple at First

Here's my honest advice: don't try to learn everything at once. I've seen too many developers get discouraged because they jumped straight into Kubernetes without understanding Docker properly first. That's like trying to manage a fleet of ships when you haven't figured out how to load a single container.

Start with Docker. Get comfortable with it. Build something real — containerise that side project you've been working on, or that app from your last hackathon. Once Docker feels natural and you've hit a genuine limitation (need multiple servers, need auto-scaling, need zero-downtime deploys), then look at Kubernetes.

Infrastructure knowledge might not feel as exciting as learning a new frontend framework. But understanding how your code gets from your editor to a running production environment makes you a dramatically better engineer. Every senior developer I know says the same thing. If containers are part of your path to senior engineering roles, you might also want to work through our guide on system design interview preparation, where containerised architectures come up frequently. Start small. Build up. You'll get there.
