How a Dangling Docker Volume Ate 40GB of My Server's Disk Space
My server kept running out of disk space and I couldn't figure out why. Turns out Docker was silently hoarding gigabytes of unused volumes, images, and build cache.
r5yn1r4143
2d ago
The Oops Moment
I got a monitoring alert at 2 AM: disk usage at 95%. My app was barely storing anything — just a small PostgreSQL database and a few log files. Where did 40GB go?
After 20 minutes of frantic du -sh commands, I found the culprit: /var/lib/docker/ was consuming a massive chunk of the disk.
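In case it's useful, here is a sketch of the kind of du pipeline I was running, wrapped in a small helper (the function name is mine, not a standard tool):

```shell
# largest_entries: print the five largest entries under a directory,
# human-readable, smallest to largest.
largest_entries() {
  du -sh "$1"/* 2>/dev/null | sort -h | tail -n 5
}
```

For example, largest_entries /var followed by largest_entries /var/lib/docker narrows things down quickly. Run it as root if you want to read Docker's data directory.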
What Was Happening
Every time I rebuilt my containers during development and deployment, Docker was keeping:
- Old images from every rebuild (each docker build creates a new one)
- Stopped containers that were never removed
- Volumes no container references anymore
- Layers of build cache

Docker doesn't automatically clean up after itself. It assumes you might need that data later. Over weeks of deployments, it all adds up fast.
How I Diagnosed It
First, I checked Docker's actual disk usage:
docker system df
Output showed that 87% of image space, 87% of volume space, and 100% of the build cache were reclaimable. That's almost 37GB of wasted space.
The Fix
Quick cleanup (removes only unused resources, though note that unused volumes may still hold data)
docker system prune -a --volumes
This removes:

- all stopped containers
- all images not used by a running container
- unused networks
- all unused volumes
- all build cache
I got back 38GB instantly.
What each flag does
- docker system prune — removes stopped containers, dangling images, and unused networks
- -a — also removes images not used by any container (not just dangling ones)
- --volumes — includes unused volumes (not included by default, because volumes often contain data)

If you want to be more selective
Remove only dangling images (untagged):
docker image prune
Remove only stopped containers:
docker container prune
Remove only unused volumes (careful — this deletes data!):
docker volume prune
Remove only build cache:
docker builder prune
Preventing It From Happening Again
1. Set up a cron job for weekly cleanup
# Edit crontab
crontab -e

# Add this line — runs cleanup every Sunday at 3 AM
0 3 * * 0 docker system prune -af --volumes >> /var/log/docker-cleanup.log 2>&1
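A variant I'd consider is pointing cron at a small wrapper script instead of the raw one-liner, so each run gets timestamped in the log. A sketch (the DOCKER_CMD override exists only so the function can be exercised without a real Docker daemon; in normal use it falls back to plain docker):

```shell
# docker_cleanup: log a timestamp, then prune everything unused.
# Save as e.g. /usr/local/bin/docker-cleanup.sh (a path I made up)
# and call it from the crontab entry with the same log redirection.
docker_cleanup() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] starting docker cleanup"
  "${DOCKER_CMD:-docker}" system prune -af --volumes
}
```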
2. Use --rm when running temporary containers
# Container is automatically removed when it stops
docker run --rm -it ubuntu bash
3. Limit build cache size in Docker daemon config
Edit /etc/docker/daemon.json:
{
"builder": {
"gc": {
"enabled": true,
"defaultKeepStorage": "5GB"
}
}
}
Then restart Docker:
sudo systemctl restart docker
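One caution: a syntax error in daemon.json can stop the Docker daemon from starting at all, so it's worth validating the file before the restart. A quick way, assuming python3 is installed:

```shell
# validate_json: exit non-zero if the file is not well-formed JSON.
# Uses Python's stdlib json.tool, so no extra packages are needed.
validate_json() {
  python3 -m json.tool "$1" > /dev/null
}

# validate_json /etc/docker/daemon.json && sudo systemctl restart docker
```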
4. Use multi-stage builds to reduce image size
# Stage 1: Build
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production (much smaller)
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
This keeps your final image small, so less disk space is consumed over time.
Useful Commands to Monitor Docker Disk Usage
# Check Docker disk usage summary
docker system df

# Detailed breakdown
docker system df -v

# Find the largest images
docker images --format "{{.Repository}}:{{.Tag}} {{.Size}}" | sort -k2 -h

# Find volumes and their sizes
docker system df -v | grep -A 100 "Local Volumes"
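And since this whole saga started with a 2 AM alert, a minimal threshold check is easy to script yourself. A sketch (the function name, defaults, and wiring are all illustrative; a real setup would hook into whatever monitoring you already run):

```shell
# check_disk: warn when a filesystem passes a usage threshold.
# Relies on GNU df's --output option (standard on Linux).
check_disk() {
  local mount="${1:-/}" threshold="${2:-90}" used
  used=$(df --output=pcent "$mount" | tail -n 1 | tr -dc '0-9')
  if [ "$used" -ge "$threshold" ]; then
    echo "WARNING: $mount is at ${used}% (threshold ${threshold}%)"
  else
    echo "OK: $mount is at ${used}%"
  fi
}
```

Dropped into the same weekly cron, something like this would have flagged the problem long before 95%.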
Key Takeaway
Docker never cleans up after itself automatically. Make docker system df part of your regular server health checks, set up automated cleanup, and always use --rm for temporary containers. Your future self (and your 2 AM self) will thank you.