
Reduce Docker Image Size Like a Pro

⏳ 4 min read


Ahoy, tech voyager! Have you ever wondered how to make your Docker images slimmer, lighter, and more efficient? Well, you’re in the right place. In this blog, we’re going to uncover the secrets to reducing Docker image sizes without compromising on functionality.

Imagine being able to deploy your applications faster, optimize storage usage, and even save on network transfers. Sounds like magic, right? Let’s explore the art of Docker image size reduction together.

Set Sail with a Minimal Base Image

Every journey begins with the right foundation. Choose a minimal base image that won’t weigh down your ship. Think of Alpine Linux as your trustworthy vessel – it’s compact, secure, and perfect for long journeys.

Instead of opting for a heavyweight image, such as a full-blown Linux distribution, go for the lean and mean option. Your containers will thank you for the lighter load.

# Use a lightweight base image
FROM node:20-alpine

The Art of Multi-Stage Builds

Now, let’s talk about craftsmanship. Multi-stage builds are like having a ship that transforms as needed. Craft your Dockerfile to have multiple stages – one for building and another for running.

This way, you can use the building stage to assemble all your resources, and then seamlessly transition to the runtime stage with only what’s necessary.

# Build stage
FROM node:20 AS builder
# Set a working directory so the build output lands in a known path
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Runtime stage
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD [ "node", "dist/index.js" ]

Navigate with Fewer Layers

Just like sailing, navigating through the Docker landscape requires finesse. Minimize your layers by combining commands wherever possible.

Each layer adds weight, so it’s best to avoid unnecessary stops along the way. Think of it as a seamless voyage – less time docked, more time sailing smoothly.

For example, instead of using multiple RUN instructions in your Dockerfile, combine them into a single one. This reduces the number of layers and keeps your image slim.


# Before: three RUN instructions create three separate layers
FROM node:20
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean

# After: one RUN instruction creates a single layer
FROM node:20
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

Cache, Cache, Cache Dependencies

Caches are your secret treasures in the Docker realm. They save time and effort by reusing layers from previous voyages.

However, remember that not all treasures are forever. Be mindful of cache invalidation, especially when your dependencies evolve. Sometimes it’s worth taking a brief pause to ensure a smoother journey ahead.

For example, when your package.json hasn’t changed, Docker can reuse the cached layer for npm install.

FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

This copies package.json and package-lock.json. If you use a different package manager, adjust the command accordingly: yarn.lock for Yarn, or pnpm-lock.yaml for pnpm.

Hoist the .dockerignore file

Let’s trim the cargo before setting sail. The .dockerignore file is your compass to exclude unnecessary files from being loaded into your containers.

Only pack what’s essential for the voyage, and leave the rest behind. This keeps your images light, nimble, and ready for adventure.

Example .dockerignore file:
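A minimal sketch for a typical Node.js project (the entries are illustrative; adjust them to match your repository):

```text
node_modules
npm-debug.log
.git
.gitignore
Dockerfile
.dockerignore
.env
coverage
*.md
```

Note that node_modules is usually the biggest win: it is rebuilt inside the image by npm install anyway, so there is no reason to ship it in the build context.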


Compress Your Cargo

To make the most of your cargo hold, compress bulky files before loading them aboard. A compressed archive shrinks the build context the Docker client sends to the daemon and transfers quickly.

Consider using gzip or another compression tool, and remember to remove the archive in the same layer that extracts it, so it doesn't add weight to the final image.

# On the host, before building the image:
#   tar -czf assets.tar.gz assets/
FROM node:20
WORKDIR /app
# Copy the pre-compressed archive, then extract and delete it in a
# single RUN so the archive itself doesn't linger in a layer
COPY assets.tar.gz .
RUN tar -xzf assets.tar.gz && rm assets.tar.gz
# Install dependencies
COPY package*.json ./
RUN npm install
# Copy remaining files
COPY . .
CMD ["npm", "start"]

Optimise the Dockerfile order

Last but not least, plan your journey wisely. Arrange your Dockerfile commands in an optimized order.

Put the instructions that change least often first, such as installing dependencies from your lockfile, and the instructions that change most often, like copying your application source, last. This way, Docker can reuse cached layers and keep your image builds swift and steady.

FROM node:20
WORKDIR /app
# Dependencies change rarely: install them first so this layer stays cached
COPY package*.json ./
RUN npm install
# Static assets change occasionally
COPY assets/ assets/
# Application source changes most often: copy it last
COPY . .
CMD ["npm", "start"]


And there you have it, intrepid Docker explorer! With these techniques at your disposal, you’re now equipped to sculpt Docker images that are both efficient and effective.

From embracing minimalism with base images to mastering multi-stage builds and navigating through fewer layers, you’ve earned your captain’s hat in the world of Docker image optimization. So hoist your sails and set forth on a voyage of streamlined containers. Bon voyage and happy coding!
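Putting it all together, a Dockerfile that combines a minimal base image, a multi-stage build, and cache-friendly ordering might look like this sketch (the dist/ output path and the npm run build script are assumptions about your project):

```dockerfile
# Build stage: full toolchain for installing and compiling
FROM node:20 AS builder
WORKDIR /app
# Lockfile first, so dependency layers stay cached across builds
COPY package*.json ./
RUN npm install
# Source last, since it changes most often
COPY . .
RUN npm run build

# Runtime stage: minimal Alpine base with only the build output
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
```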

