Using Docker for Local Development: Tips and Pitfalls
Introduction

Docker has transformed how developers build, test, and run applications locally. By packaging applications and their dependencies into containers, Docker eliminates environment inconsistencies that plague development teams. The classic “works on my machine” problem becomes a thing of the past when everyone runs identical containers. However, Docker introduces its own complexity and potential pitfalls. This guide covers practical tips for using Docker effectively in local development, common mistakes to avoid, and best practices that will save you time and frustration.

Why Docker for Local Development

Docker provides isolated, reproducible environments that closely mirror production. This consistency reduces debugging time and catches environment-specific issues early. Several key advantages make Docker valuable for development teams:

  • Consistent environments across all team members
  • Simple onboarding for new developers
  • Easy replication of production architecture
  • Isolation of dependencies between projects
  • Quick switching between different versions of databases, runtimes, and tools
  • Simplified cleanup when removing projects

Docker Compose amplifies these benefits by orchestrating multiple containers with a single configuration file. A complete stack including application servers, databases, caches, and message queues starts with one command.
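As a sketch, a compose file for a hypothetical stack with an application server, a PostgreSQL database, and a Redis cache might look like this (service names and images are illustrative assumptions):

```yaml
# docker-compose.yml — a hypothetical full development stack
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
      - cache
  db:
    image: postgres:15
  cache:
    image: redis:7
```

The entire stack starts with docker compose up -d and stops with docker compose down.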

Essential Docker Compose Tips

Docker Compose simplifies multi-container workflows. A well-structured compose file makes development seamless.

Use Environment Overrides

Create separate compose files for different environments. Use a base docker-compose.yml for common settings and docker-compose.override.yml for development-specific configurations:

# docker-compose.yml (base)
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"

# docker-compose.override.yml (development)
version: '3.8'
services:
  app:
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - DEBUG=true

Docker Compose automatically merges both files. For production, explicitly specify only the base file with docker compose -f docker-compose.yml up.

Define Service Dependencies

Use depends_on with health checks to ensure services start in the correct order:

services:
  app:
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

This prevents your application from crashing when the database is not yet ready to accept connections.

Optimizing Bind Mounts

Bind mounts map local directories into containers, enabling live code reloading during development. Changes on your host immediately appear inside the container.

volumes:
  - .:/app
  - /app/node_modules

The second line prevents the container’s node_modules from being overwritten by the host directory. This pattern works for any dependency directory that should remain container-specific.

Handling macOS and Windows Performance

Bind mounts on macOS and Windows suffer from significant performance overhead due to filesystem translation layers. Several strategies mitigate this problem:

Use cached or delegated mounts (recent Docker Desktop releases ignore these consistency flags, but they remain harmless to keep):

volumes:
  - .:/app:cached

Exclude heavy directories from mounts:

volumes:
  - .:/app
  - /app/node_modules
  - /app/.next
  - /app/dist

Consider file sync tools: Tools like Mutagen synchronize files more efficiently than native bind mounts on non-Linux systems.

Building Efficient Images

Efficient Docker images speed up builds, reduce disk usage, and accelerate developer onboarding.

Choose Slim Base Images

Large base images slow everything down. Prefer slim or alpine variants:

# Avoid
FROM node:18

# Prefer
FROM node:18-slim

# Even smaller
FROM node:18-alpine

Alpine images are significantly smaller but may require additional configuration for certain native dependencies.
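For example, npm packages that compile native addons with node-gyp need a build toolchain that Alpine does not ship by default. A sketch of the extra setup, assuming a typical Node.js project:

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Alpine lacks the toolchain some native modules need;
# python3, make, and g++ cover the usual node-gyp requirements
RUN apk add --no-cache python3 make g++

COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "start"]
```

If no dependency compiles native code, the apk line can be dropped and the image stays minimal.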

Leverage Layer Caching

Docker caches each layer of your image. Structure your Dockerfile to maximize cache hits by copying files that change less frequently first:

FROM node:18-slim
WORKDIR /app

# Dependencies change less often than source code
COPY package.json package-lock.json ./
RUN npm ci

# Source code changes frequently
COPY . .

CMD ["npm", "start"]

When you modify only source code, Docker reuses the cached dependency-installation layer, dramatically reducing build times.

Use Multi-Stage Builds

Multi-stage builds separate build-time dependencies from runtime, producing smaller final images:

# Build stage
FROM node:18-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:18-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/main.js"]

The final image contains only what is needed to run the application.

Managing Data Persistence

Containers are ephemeral by design. Without proper volume configuration, you lose database data when containers are removed or recreated.

Use Named Volumes for Databases

services:
  db:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

Named volumes persist data across container restarts and are managed by Docker. They also perform better than bind mounts for database workloads.

Seed Databases Automatically

PostgreSQL and MySQL images automatically run scripts from specific directories on first startup with an empty data directory:

services:
  db:
    image: postgres:15
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

This ensures fresh environments start with necessary schema and seed data.
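A hypothetical init.sql might create a schema and a seed row (table and column names here are purely illustrative):

```sql
-- init.sql — runs only on first startup with an empty data directory
CREATE TABLE users (
    id    SERIAL PRIMARY KEY,
    email TEXT NOT NULL UNIQUE
);

INSERT INTO users (email) VALUES ('dev@example.com');
```

To re-run the scripts, remove the named volume so the database initializes from scratch.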

Common Pitfalls to Avoid

Overusing Containers

Not everything benefits from containerization during development. Running unit tests, linters, and code formatters often performs better outside Docker. Reserve containers for services that genuinely benefit from isolation, such as databases, message queues, and dependent services.

Ignoring .dockerignore

Without a proper .dockerignore file, Docker sends unnecessary files to the build context, slowing builds significantly:

node_modules
.git
*.log
.env.local
dist
coverage
.DS_Store

A well-maintained .dockerignore keeps builds fast and prevents accidentally including sensitive files in images.

Storing Secrets in Images

Never hardcode secrets, API keys, or passwords in Dockerfiles. They become permanently embedded in image layers and remain accessible even if you delete the file in a subsequent layer. Instead, use environment variables or secret management tools:

services:
  app:
    environment:
      - DATABASE_URL=${DATABASE_URL}
    env_file:
      - .env
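The referenced .env file lives next to the compose file, stays out of version control (add it to .gitignore), and might look like this (all values are placeholders):

```env
# .env — local-only values, never committed
DATABASE_URL=postgres://postgres:postgres@db:5432/app_dev
DEBUG=true
```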

Forgetting Cleanup

Docker resources accumulate over time, consuming disk space and slowing down operations. Regular cleanup prevents these problems:

# Remove unused containers, networks, and images
docker system prune

# Also remove unused volumes (careful with data!)
docker system prune --volumes

# Remove all stopped containers
docker container prune

# Remove dangling images
docker image prune

Consider running docker system prune weekly or adding it to your project cleanup scripts.

Ignoring Resource Limits

Containers without resource limits can consume all available CPU and memory, freezing your development machine. Set reasonable limits:

services:
  app:
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'

Not Matching Production

Development containers should mirror production as closely as possible. Use the same database versions, runtime versions, and base images. Differences between environments create subtle bugs that only appear in production.

Debugging in Containers

Effective debugging is essential for productive development.

Accessing Container Shells

# Access a running container
docker exec -it container_name /bin/sh

# View container logs
docker logs -f container_name

# Inspect container details
docker inspect container_name

Enable Remote Debugging

Configure your application for remote debugging and expose the debug port:

services:
  app:
    ports:
      - "3000:3000"
      - "9229:9229"  # Node.js debug port
    command: node --inspect=0.0.0.0:9229 src/main.js

Then attach your IDE debugger to localhost:9229.
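In VS Code, for instance, an attach configuration in .vscode/launch.json might look like this (remoteRoot must match the container's working directory, assumed to be /app here):

```json
{
  "type": "node",
  "request": "attach",
  "name": "Attach to Docker",
  "port": 9229,
  "address": "localhost",
  "localRoot": "${workspaceFolder}",
  "remoteRoot": "/app"
}
```

The localRoot/remoteRoot mapping lets breakpoints set in local files resolve to the corresponding paths inside the container.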

Best Practices Summary

  • Use Docker Compose for multi-service projects
  • Implement health checks for service dependencies
  • Optimize bind mounts for performance on macOS and Windows
  • Structure Dockerfiles for maximum layer caching
  • Use slim base images and multi-stage builds
  • Persist database data with named volumes
  • Maintain a comprehensive .dockerignore file
  • Never store secrets in images
  • Clean up resources regularly
  • Set resource limits to prevent system slowdowns
  • Match production environments as closely as possible
  • Document your Compose workflows for team members

Conclusion

Docker transforms local development by providing consistent, reproducible environments that mirror production. When used correctly, it accelerates onboarding, reduces environment-related bugs, and simplifies complex multi-service architectures. However, ignoring best practices leads to slow builds, disk bloat, and frustrated developers. By following the tips in this guide and avoiding common pitfalls, you will create a smooth, efficient development workflow that scales with your team.

For orchestrating complex multi-service setups, read Docker Compose for Local Development: Orchestrating Services. To deploy your containers to production, see Kubernetes 101: Deploying and Managing Containerised Apps. For Spring Boot containerization, check out Spring Boot Docker Kubernetes Deployment. You can also visit the official Docker documentation for comprehensive reference material.
