Docker Security Best Practices
Container security isn't just about preventing attacks; it's about building a defense-in-depth strategy that protects your applications, data, and infrastructure at every layer. A single misconfigured container can expose your entire host system, making security practices essential rather than optional. This guide walks you through implementing practical security measures that protect your Docker environment without sacrificing development velocity.
Prerequisites
Before diving into security configurations, make sure you have:
- Docker installed and running (version 20.10 or later recommended)
- Basic experience building and running containers
- Administrative access to configure Docker daemon settings
- A text editor for creating and modifying Dockerfiles
- A tool to scan your images (Docker Scout or similar)
Understanding the Container Security Model
Container security operates on multiple layers, each requiring specific attention:
Container Security Layers:
┌─────────────────────────────────────┐
│ Application Code │ ← Input validation, secure coding
├─────────────────────────────────────┤
│ Container Runtime │ ← User permissions, capabilities
├─────────────────────────────────────┤
│ Container Image │ ← Base image, dependencies, secrets
├─────────────────────────────────────┤
│ Docker Daemon │ ← Daemon configuration, API access
├─────────────────────────────────────┤
│ Host Operating System │ ← Kernel security, file permissions
└─────────────────────────────────────┘
A vulnerability at any layer can compromise the entire stack, so you need security measures at each level.
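Before hardening individual layers, it's worth checking which protections the daemon and host already provide. The output varies by host, but typically lists seccomp plus AppArmor or SELinux:
# Kernel security features the Docker daemon reports for this host
docker info --format '{{json .SecurityOptions}}'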
Secure Image Foundation
Your security journey starts with the base image. Compromised or outdated base images introduce vulnerabilities before you even add your application code.
Choose Minimal Base Images
Start with the smallest possible base image that meets your needs. Distroless images contain only your application and runtime dependencies:
# Instead of using a full Ubuntu image, which ships an entire OS userland you don't need
FROM ubuntu:22.04
# Use distroless for production
FROM gcr.io/distroless/nodejs18-debian11
COPY app.js /app.js
ENTRYPOINT ["node", "/app.js"]
For development environments where you need debugging tools, use Alpine Linux:
# Alpine provides a good balance of size and functionality
FROM node:18-alpine
RUN apk add --no-cache dumb-init
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "app.js"]
Pin Specific Image Versions
Never use the latest tag in production. It can introduce breaking changes or security vulnerabilities without warning:
# Bad: Version can change unexpectedly
FROM node:latest
# Good: Specific version with digest for maximum security
FROM node:18.17.1-alpine@sha256:f77a1aef2da8d83e45ec990f45df50f1a286c5fe8bbfb8c6e4246c6389705c0b
Find image digests using:
# Get the exact digest for an image
docker inspect node:18.17.1-alpine --format='{{index .RepoDigests 0}}'
Scan Images Before Use
Implement vulnerability scanning as part of your image selection process:
# Scan a base image before using it
docker scout quickview node:18.17.1-alpine
# Get detailed vulnerability report
docker scout cves node:18.17.1-alpine
Look for images with fewer critical and high-severity vulnerabilities. Sometimes a slightly older version has fewer known issues than the latest release.
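If the full report is noisy, recent Docker Scout releases let you filter to the severities that should block adoption:
# Show only critical and high-severity findings
docker scout cves --only-severity critical,high node:18.17.1-alpine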
Build Secure Images
How you build your images significantly impacts their security posture. Follow these practices to create hardened images.
Create Non-Root Users
Running containers as root violates the principle of least privilege and increases your attack surface:
FROM node:18-alpine
# Create a dedicated user and group
RUN addgroup -g 1001 -S nodejs && \
adduser -S nextjs -u 1001 -G nodejs
# Work from a dedicated directory instead of the filesystem root
WORKDIR /app
# Set ownership before switching users
COPY --chown=nextjs:nodejs package*.json ./
RUN npm ci --only=production
COPY --chown=nextjs:nodejs . .
# Switch to non-root user
USER nextjs
EXPOSE 3000
CMD ["npm", "start"]
Verify the user configuration:
# Check what user the container runs as
docker run --rm your-app:latest whoami
docker run --rm your-app:latest id
Multi-Stage Builds for Smaller Attack Surface
Multi-stage builds let you include build tools in intermediate stages while keeping them out of your final image:
# Build stage - includes compilers, dev dependencies
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --include=dev
COPY . .
RUN npm run build && npm prune --production
# Production stage - minimal runtime environment
FROM node:18-alpine AS runtime
RUN addgroup -g 1001 -S nodejs && \
adduser -S nextjs -u 1001 -G nodejs
WORKDIR /app
COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=builder --chown=nextjs:nodejs /app/package.json ./package.json
USER nextjs
EXPOSE 3000
CMD ["npm", "start"]
This approach can reduce your final image size by 60-80% while eliminating development tools that attackers could exploit.
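You can measure the difference on your own project by building both variants and comparing sizes; the myapp tags below are placeholders for whatever you name your images:
# Compare image sizes for a single-stage and a multi-stage build of the same app
docker image ls myapp --format '{{.Repository}}:{{.Tag}}  {{.Size}}'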
Keep Secrets Out of Images
Never bake credentials, API keys, or other sensitive data into images. They become visible to anyone with access to the image:
# Bad: Credentials in the image
ENV DB_PASSWORD=supersecret123
# Bad: Even worse - credentials in build arguments show up in history
ARG SECRET_KEY=abc123
ENV SECRET_KEY=$SECRET_KEY
# Good: Expect secrets to be provided at runtime
ENV NODE_ENV=production
# Secrets come from environment variables, files, or secret management
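It's also worth auditing images you've already built, since anything baked in at build time is visible in the image metadata:
# Every layer's build command is recorded in the history; look for leaked values
docker history --no-trunc your-app:latest
# Environment variables stored in the image config are visible as well
docker inspect --format '{{json .Config.Env}}' your-app:latest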
Use build secrets for temporary access during builds:
# Dockerfile
FROM alpine
# The secret is mounted only for this RUN step and is never written to an image layer
RUN --mount=type=secret,id=github_token \
TOKEN=$(cat /run/secrets/github_token) && \
test -n "$TOKEN"  # use $TOKEN here for git clones or API calls during the build
# Build with secrets, reading the value from the GITHUB_TOKEN variable in your shell
docker build --secret id=github_token,env=GITHUB_TOKEN .
Runtime Security Configuration
How you run containers is just as important as how you build them. Apply these runtime restrictions to limit potential damage from compromised containers.
Drop Unnecessary Capabilities
Linux capabilities allow fine-grained control over what containers can do. Drop all capabilities by default and add only what you need:
# Start with no capabilities
docker run --cap-drop=all your-app:latest
# Add specific capabilities if needed
docker run --cap-drop=all --cap-add=NET_BIND_SERVICE your-app:latest
# For web servers that need to bind to port 80
docker run --cap-drop=all --cap-add=NET_BIND_SERVICE nginx:alpine
Common capabilities you might need:
- NET_BIND_SERVICE: Bind to ports below 1024
- CHOWN: Change file ownership
- DAC_OVERRIDE: Override file permission checks
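To confirm a container really runs with only the capabilities you granted, check the effective capability mask of its main process:
# CapEff is the effective capability bitmask; with --cap-drop=all it is all zeros
docker run --rm --cap-drop=all alpine grep CapEff /proc/1/status
# Adding NET_BIND_SERVICE back sets a single bit in the mask
docker run --rm --cap-drop=all --cap-add=NET_BIND_SERVICE alpine grep CapEff /proc/1/status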
Use Read-Only Root Filesystem
Make the container's root filesystem read-only to prevent runtime modification:
# Make root filesystem read-only
docker run --read-only your-app:latest
# Add writable tmpfs for temporary files
docker run --read-only --tmpfs /tmp --tmpfs /var/run your-app:latest
If your application needs to write files, use volumes for persistent data and tmpfs for temporary files:
# Example for an app that writes logs and uploads
docker run --read-only \
--tmpfs /tmp \
--volume app-logs:/var/log/app \
--volume uploads:/var/uploads \
your-app:latest
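A quick sanity check confirms the restriction is working:
# The write to the root filesystem fails, while the tmpfs mount stays writable
docker run --rm --read-only --tmpfs /tmp alpine sh -c 'touch /test; touch /tmp/test && echo "/tmp is writable"'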
Implement Resource Limits
Resource limits prevent containers from consuming all available system resources:
# Limit memory and CPU usage
docker run --memory=512m --cpus=1.0 your-app:latest
# Limit memory to 512m, with memory plus swap capped at 1g
docker run --memory=512m --memory-swap=1g your-app:latest
# Limit process count to prevent fork bombs
docker run --pids-limit=100 your-app:latest
For production workloads, use Docker Compose with resource constraints:
version: '3.8'
services:
  app:
    image: your-app:latest
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.5'
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
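You can verify that the limits are actually applied to running containers; the container name below is a placeholder for whatever Compose created:
# Live memory and CPU usage measured against the configured limits
docker stats --no-stream
# Limits recorded on a specific container (in bytes and nano-CPUs)
docker inspect your-app-container --format 'Memory: {{.HostConfig.Memory}}  NanoCpus: {{.HostConfig.NanoCpus}}'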
Network Security
Container networking creates additional attack vectors that require specific security measures.
Use Custom Networks
Avoid the default bridge network for production workloads. Create custom networks with specific security policies:
# Create an isolated network for your application
docker network create --driver bridge app-network
# Run containers on the custom network
docker run --network app-network --name frontend your-frontend:latest
docker run --network app-network --name api your-api:latest
docker run --network app-network --name database postgres:13
Custom networks provide:
- Better isolation between different applications
- Built-in DNS resolution between containers
- More granular network policies
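Assuming the app-network and the api container from the commands above are running, a quick test shows both the built-in name resolution and the isolation from other networks:
# Containers on the same custom network resolve each other by name
docker run --rm --network app-network alpine ping -c 1 api
# The same lookup fails from the default bridge network
docker run --rm alpine ping -c 1 api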
Implement Network Segmentation
Separate different tiers of your application into different networks:
# Create separate networks for different tiers
docker network create frontend-network
docker network create backend-network
docker network create database-network
# Frontend connects to backend
docker run --network frontend-network your-frontend:latest
docker network connect backend-network frontend-container-name
# Backend connects to database
docker run --network backend-network your-api:latest
docker network connect database-network api-container-name
# Database is isolated
docker run --network database-network postgres:13
This creates a network architecture like:
Internet
↓
┌─────────────┐
│ Frontend │ ← Only exposed to internet
└─────────────┘
↓
┌─────────────┐
│ Backend │ ← Internal communication only
└─────────────┘
↓
┌─────────────┐
│ Database │ ← Most isolated
└─────────────┘
Control Port Exposure
Only expose ports that external clients need to access:
# Bad: Exposes all ports
docker run -P your-app:latest
# Bad: Exposes internal port to all interfaces
docker run -p 5432:5432 postgres:13
# Good: Only expose what's needed
docker run -p 127.0.0.1:8080:8080 your-app:latest
# Good: Use specific interface for database
docker run -p 127.0.0.1:5432:5432 postgres:13
Secrets Management
Proper secrets management prevents credential exposure and enables secure access to external services.
Use Docker Secrets
For Docker Swarm environments, use Docker secrets for sensitive data:
# Create a secret from stdin
echo "supersecret123" | docker secret create db_password -
# Create a secret from a file
docker secret create api_key /path/to/keyfile
# Use secrets in services
docker service create \
--name webapp \
--secret db_password \
--secret api_key \
your-app:latest
Access secrets in your application:
// Read secret from mounted file
const fs = require('fs');
const dbPassword = fs.readFileSync('/run/secrets/db_password', 'utf8').trim();
External Secrets Management
For production environments, integrate with dedicated secrets management tools:
# Using HashiCorp Vault
docker run --rm \
-e VAULT_ADDR=https://vault.company.com \
-e VAULT_TOKEN_FILE=/vault/token \
-v vault-token:/vault \
your-app:latest
# Using AWS Secrets Manager
docker run --rm \
-e AWS_REGION=us-west-2 \
-e SECRET_ARN=arn:aws:secretsmanager:us-west-2:123456789012:secret:prod/db-AbCdEf \
your-app:latest
Environment Variable Security
When you must use environment variables for secrets, take precautions:
# Use a secrets file instead of command line
docker run --env-file secrets.env your-app:latest
# Or pass from host environment without exposure
docker run -e DB_PASSWORD="$DB_PASSWORD" your-app:latest
Never log environment variables that might contain secrets:
# Bad: Might log secrets
RUN env
# Good: Only show specific safe variables
RUN echo "NODE_ENV=$NODE_ENV"
Monitoring and Logging
Security monitoring helps detect and respond to threats quickly.
Container Runtime Monitoring
Monitor container behavior for suspicious activity:
# Monitor syscalls and file access with Falco (see the Falco docs for the full
# set of mounts and privileges it needs on your host)
docker run --security-opt apparmor=unconfined \
--security-opt seccomp=unconfined \
--cap-add SYS_PTRACE \
falcosecurity/falco
Set up logging to capture security events:
# Configure Docker daemon logging
# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "log-level": "warn"
}
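After restarting the daemon, confirm the settings took effect; the container name below is a placeholder:
# Logging driver configured at the daemon level
docker info --format '{{.LoggingDriver}}'
# Log configuration attached to an individual container
docker inspect your-app-container --format '{{.HostConfig.LogConfig.Type}} {{.HostConfig.LogConfig.Config}}'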
Application Security Logging
Configure your applications to log security-relevant events:
// Example Node.js security logging (assumes an Express app)
const express = require('express');
const winston = require('winston');

const app = express();
const securityLogger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  defaultMeta: { service: 'security' },
  transports: [new winston.transports.File({ filename: 'security.log' })],
});

// Log authentication attempts
app.post('/login', (req, res) => {
  securityLogger.info('Login attempt', {
    ip: req.ip,
    userAgent: req.get('User-Agent'),
    timestamp: new Date().toISOString(),
  });
  // ...verify credentials and send a response here
});
Docker Daemon Security
Secure the Docker daemon itself to prevent privilege escalation attacks.
Enable User Namespaces
User namespaces map container root to an unprivileged user on the host:
# Configure Docker daemon
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}
# Restart Docker daemon
sudo systemctl restart docker
This maps container UID 0 (root) to a high-numbered UID on the host, significantly reducing the impact of container escapes.
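Once remapping is enabled and the daemon has restarted, you can verify the mapping; the userns-check container name below is arbitrary:
# Subordinate UID/GID ranges created for the dockremap user
grep dockremap /etc/subuid /etc/subgid
# Start a test container and check which host UID its process actually runs under
docker run -d --rm --name userns-check alpine sleep 60
ps -o user= -p "$(docker inspect --format '{{.State.Pid}}' userns-check)"
docker rm -f userns-check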
Secure Docker API Access
If you expose the Docker API, use TLS and client certificates:
# Start the daemon with TLS verification (after generating CA, server, and client certificates)
dockerd \
--tlsverify \
--tlscacert=ca.pem \
--tlscert=server-cert.pem \
--tlskey=server-key.pem \
-H=0.0.0.0:2376
# Connect with client certificates
docker --tlsverify \
--tlscacert=ca.pem \
--tlscert=cert.pem \
--tlskey=key.pem \
-H=tcp://docker-host:2376 \
version
Regular Security Audits
Implement regular security assessments:
# Docker Bench for Security
docker run --rm --net host --pid host --userns host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /etc:/etc:ro \
-v /usr/bin/containerd:/usr/bin/containerd:ro \
-v /usr/bin/runc:/usr/bin/runc:ro \
-v /usr/lib/systemd:/usr/lib/systemd:ro \
-v /var/lib:/var/lib:ro \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
--label docker_bench_security \
docker/docker-bench-security
This tool checks your Docker installation against CIS Docker Benchmark recommendations.
Vulnerability Management
Establish processes for identifying and addressing vulnerabilities in your container environment.
Automated Scanning Pipeline
Integrate vulnerability scanning into your CI/CD pipeline:
# GitHub Actions example
name: Security Scan
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run vulnerability scan
        run: |
          docker scout cves myapp:${{ github.sha }}
          # Fail the build if critical vulnerabilities are found
          docker scout cves --exit-code --only-severity critical myapp:${{ github.sha }}
Continuous Monitoring
Set up ongoing monitoring for new vulnerabilities:
#!/bin/bash
# scan-running-containers.sh - scan the image of every running container
for image in $(docker ps --format "{{.Image}}" | sort -u); do
    echo "Scanning $image..."
    docker scout cves "$image"
done
Add this script to your cron jobs for regular execution.
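For example, a crontab entry like the following runs it every morning; the paths are illustrative, so adjust them to your environment:
# Scan daily at 06:00 and append results to a log file
0 6 * * * /usr/local/bin/scan-running-containers.sh >> /var/log/container-scans.log 2>&1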
Response Procedures
Establish clear procedures for handling security incidents:
- Immediate Response: Stop affected containers if actively exploited
- Assessment: Determine scope and impact of the vulnerability
- Remediation: Update base images and rebuild containers
- Verification: Scan updated images to confirm fixes
- Deployment: Roll out updated containers with minimal downtime
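For the immediate-response step, a few commands can quarantine a container without destroying evidence you may need for analysis; the network and container names below are placeholders:
# Cut the container off from its network and freeze its processes
docker network disconnect app-network suspicious-container
docker pause suspicious-container
# If it is being actively exploited, stop it outright
docker stop suspicious-container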
Container security is an ongoing process that requires attention at every stage of your development and deployment pipeline. By implementing these practices systematically, you create multiple layers of defense that significantly reduce your attack surface. Start with the basics like using trusted base images and non-root users, then gradually implement more advanced measures like network segmentation and runtime monitoring as your security maturity grows.