I keep seeing the same mistakes over and over again. These aren't catastrophic errors that break your application. They're subtle inefficiencies that compound over time. Your builds get slower, your images get bigger, and your deployments take longer, but you don't notice because it happens gradually.
Let me show you the most common Docker mistakes I see in production codebases, and more importantly, how to fix them. Some of these tips might save you seconds per build, others might save you gigabytes of disk space. Either way, they're worth knowing.
Mistake #1: Not Using .dockerignore
This is the easiest mistake to make and the easiest to fix. Without a .dockerignore file, Docker copies your entire project directory into the build context, including things you absolutely don't need in your container.
The Problem:
FROM node:18
WORKDIR /app
# This copies EVERYTHING
COPY . .
RUN npm install
This innocent-looking COPY . . command is sending your node_modules, .git directory, build artifacts, test files, and documentation to the Docker daemon. I've seen build contexts exceed 2GB because of this.
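Before writing the ignore file, it helps to see what's actually inflating the context. A rough sketch using du (the demo directory below is just for illustration; in practice, run the du pipeline from your own project root):

```shell
# Build a throwaway project directory purely for demonstration
demo=$(mktemp -d)
mkdir -p "$demo/node_modules" "$demo/.git"
head -c 4096 /dev/zero > "$demo/node_modules/big.bin"
echo 'console.log("hi")' > "$demo/app.js"
cd "$demo"

# The ten largest top-level entries, dotfiles included -
# this is roughly what COPY . . would send to the daemon
report=$(du -sh .[!.]* * 2>/dev/null | sort -rh | head -n 10)
echo "$report"
```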
The Fix:
Create a .dockerignore file in your project root:
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.env.local
dist
build
coverage
.vscode
.idea
*.md
.DS_Store
This simple file can reduce your build context from 2GB to 50MB. Your builds will be faster, and you'll avoid accidentally leaking sensitive files into your images.
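You can ballpark the savings without running a build: Docker ships the context to the daemon as a tar archive, so tarring your project with and without the excluded paths gives a rough before/after. A sketch (note that .dockerignore patterns are not identical to tar's `--exclude` syntax, so treat this as an estimate):

```shell
# Throwaway project for demonstration; use your real project dir in practice
proj=$(mktemp -d)
mkdir -p "$proj/node_modules"
head -c 1048576 /dev/zero > "$proj/node_modules/junk.bin"   # 1MB of filler
echo 'console.log("hi")' > "$proj/index.js"
cd "$proj"

full=$(tar -cf - . | wc -c)                                 # everything, like COPY . .
trimmed=$(tar --exclude='./node_modules' -cf - . | wc -c)   # with the ignore rule
echo "full: $full bytes, ignoring node_modules: $trimmed bytes"
```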
Mistake #2: Installing Unnecessary Dependencies in Production
I see this constantly in Node.js projects:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install # Installs devDependencies too!
COPY . .
CMD ["node", "index.js"]
That npm install command installs everything, including Jest, Webpack, ESLint, and every other development tool you don't need in production.
The Fix:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production # Skip devDependencies
COPY . .
CMD ["node", "index.js"]
For Python projects, the same principle applies:
# Bad
RUN pip install -r requirements.txt
# Good - use separate requirements files
RUN pip install --no-cache-dir -r requirements-prod.txt
I've seen this single change reduce image sizes by 200-400MB in typical Node.js applications.
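The split usually looks something like this. The file names and pinned packages here are illustrative, not a pip convention you must follow; the `-r` line lets the dev file pull in everything from the prod file:

```text
# requirements-prod.txt - runtime only
flask==3.0.0
gunicorn==21.2.0

# requirements-dev.txt - everything above, plus tooling
-r requirements-prod.txt
pytest==7.4.3
black==23.11.0
```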
Mistake #3: Running as Root User
This is a security issue that's surprisingly common:
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
# Running as root!
CMD ["node", "index.js"]
By default, your container runs as root. If an attacker compromises your application, they have root access inside the container.
The Fix:
FROM node:18
# Create app user
RUN groupadd -r appuser && useradd -r -g appuser appuser
WORKDIR /app
COPY --chown=appuser:appuser package*.json ./
RUN npm ci --only=production
COPY --chown=appuser:appuser . .
# Switch to non-root user
USER appuser
CMD ["node", "index.js"]
Or use the built-in node user:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# Built-in non-root user
USER node
CMD ["node", "index.js"]
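If you maintain many Dockerfiles, even a crude grep can flag the ones that never switch away from root. A minimal sketch: it only checks for a literal USER instruction (a real linter like hadolint does this properly), and it writes a sample Dockerfile to a temp path purely for demonstration:

```shell
# Sample Dockerfile to audit; point df at your real files instead
df=$(mktemp)
cat > "$df" <<'EOF'
FROM node:18
WORKDIR /app
USER node
CMD ["node", "index.js"]
EOF

# The last USER instruction wins, so inspect the final one
last_user=$(grep -E '^USER[[:space:]]' "$df" | tail -n 1 | awk '{print $2}')
if [ -z "$last_user" ] || [ "$last_user" = "root" ]; then
  echo "WARNING: $df runs as root"
else
  echo "OK: $df runs as $last_user"
fi
```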
Mistake #4: Not Using Multi-Stage Builds
Single-stage builds force you to include build tools in your final image:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install # Includes build tools
COPY . .
RUN npm run build # TypeScript, Webpack, etc.
CMD ["node", "dist/index.js"]
Your production image now contains TypeScript, Webpack, and all your build dependencies even though you only need the compiled output.
The Fix:
# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install # All dependencies for building
COPY . .
RUN npm run build
# Production stage
FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production # Only runtime dependencies
COPY --from=builder /app/dist ./dist
USER node
CMD ["node", "dist/index.js"]
The builder stage has everything needed to compile your app. The production stage only has the compiled output and runtime dependencies. I've seen this reduce images from 1.2GB to 200MB.
For compiled languages like Go, the difference is even more dramatic:
# Build stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o app
# Production stage
FROM alpine:3.19
WORKDIR /app
COPY --from=builder /app/app .
RUN addgroup -g 1000 appuser && adduser -D -u 1000 -G appuser appuser
USER appuser
CMD ["./app"]
Go from 800MB (golang base image) to 15MB (alpine + binary).
Mistake #5: Not Pinning Base Image Versions
# Don't do this!
FROM node:latest
Using latest or unpinned tags means your builds aren't reproducible. Node 18 today might be Node 22 next month, potentially breaking your application in subtle ways.
The Fix:
# Good - specific version
FROM node:18.19.0-alpine3.19
# Even better - use digest for complete immutability
FROM node:18.19.0-alpine3.19@sha256:4c5d...
Pin your versions, but balance this with keeping images updated for security patches. I update my base images quarterly as part of dependency maintenance.
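If quarterly manual bumps feel error-prone, tools like Dependabot can watch your Dockerfile base images and open update PRs for you. A minimal config sketch (the monthly interval is just a suggestion; tune it to your patch cadence):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "monthly"
```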
Mistake #6: Installing Packages Without Cleaning Cache
Package managers cache downloads, which you don't need in your final image:
# Bad - leaves cache behind
RUN apt-get update && apt-get install -y curl
# Bad - npm leaves its download cache inside the image
RUN npm install
The Fix:
Chain commands and clean up in the same layer:
# Good - clean up in same RUN command
RUN apt-get update && \
apt-get install -y --no-install-recommends curl && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Good - clean npm's cache after installing
RUN npm ci --only=production && npm cache clean --force
# For pip
RUN pip install --no-cache-dir -r requirements.txt
Each RUN command creates a layer. If you install packages in one layer and clean up in another, the cache still exists in the first layer, bloating your image.
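For concreteness, the split-layer trap described above looks like this. The second RUN hides the cache files from the final filesystem, but they still exist inside the first layer, so the image doesn't get any smaller:

```dockerfile
# Bad - deletion happens in a later layer, so the image stays bloated
RUN apt-get update && apt-get install -y curl
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
```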
Mistake #7: Copying Files Before Installing Dependencies
This is the most common layer caching mistake:
# Bad - changes to source code invalidate dependency cache
FROM node:18
WORKDIR /app
# This changes frequently
COPY . .
RUN npm install # This rebuilds every time code changes
CMD ["node", "index.js"]
Every time you change a single line of code, Docker rebuilds your dependencies from scratch because the COPY . . layer changed.
The Fix:
Copy dependency files first, install, then copy source code:
# Good - dependency cache stays valid
FROM node:18
WORKDIR /app
# Copy only dependency files
COPY package*.json ./
RUN npm ci --only=production # Only rebuilds if package files change
# Copy source code last
COPY . .
CMD ["node", "index.js"]
This works because Docker caches layers. If package.json hasn't changed, Docker reuses the cached npm install layer instead of re-downloading dependencies.
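The cache key for a COPY layer includes a checksum of the files being copied, which is why even a one-line source edit invalidates it and everything after it. The idea in miniature (sha256sum is assumed available, as on most Linux systems):

```shell
f=$(mktemp)
printf 'console.log("v1")\n' > "$f"
key1=$(sha256sum "$f" | awk '{print $1}')

printf 'console.log("v2")\n' > "$f"   # a one-line edit
key2=$(sha256sum "$f" | awk '{print $1}')

# Different content -> different checksum -> Docker treats the COPY
# layer (and every layer after it) as a cache miss
echo "$key1"
echo "$key2"
```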
For Python:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
Bonus Tip: Use Slim or Alpine Base Images
Unless you specifically need a full OS, use minimal base images:
# 900MB
FROM node:18
# 170MB - much better
FROM node:18-slim
# 120MB - smallest, but may require extra dependencies
FROM node:18-alpine
Alpine is the smallest but uses musl instead of glibc, which can cause compatibility issues with some packages. Slim variants are usually the sweet spot.
Putting It All Together
Here's a before and after comparison using all these principles:
Before (Common mistakes):
FROM node:latest
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
CMD ["node", "dist/index.js"]
Build time: 8 minutes | Image size: 1.2GB | Security: Poor
After (Optimized):
# Build stage
FROM node:18.19.0-alpine3.19 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Production stage
FROM node:18.19.0-alpine3.19
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force
COPY --from=builder --chown=node:node /app/dist ./dist
USER node
CMD ["node", "dist/index.js"]
Build time: 2 minutes (after first build) | Image size: 180MB | Security: Good