Reduce Docker Image Size Like a Pro

⏳ 4 min read

Introduction

Ahoy, tech voyager! Have you ever wondered how to make your Docker images slimmer, lighter, and more efficient? Well, you’re in the right place. In this blog, we’re going to uncover the secrets to reducing Docker image sizes without compromising on functionality.

Imagine being able to deploy your applications faster, optimize storage usage, and even save on network transfers. Sounds like magic, right? Let’s explore the art of Docker image size reduction together.

Set Sail with a Minimal Base Image

Every journey begins with the right foundation. Choose a minimal base image that won’t weigh down your ship. Think of Alpine Linux as your trustworthy vessel – it’s compact, secure, and perfect for long journeys.

Instead of opting for a heavyweight image, such as a full-blown Linux distribution, go for the lean and mean option. Your containers will thank you for the lighter load.

# Use a lightweight base image
FROM node:20-alpine
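If you want to go even leaner than Alpine, Google's distroless images are another option. A minimal sketch, assuming your build already produces a dist/ folder with an index.js entry point:

```dockerfile
# Distroless ships only the Node.js runtime: no shell, no package manager
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
COPY dist/ ./dist/
# The image's entrypoint is already "node", so pass only the script
CMD ["dist/index.js"]
```

The trade-off: with no shell inside, debugging with `docker exec` is harder, so many crews keep distroless for production only.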

The Art of Multi-Stage Builds

Now, let’s talk about craftsmanship. Multi-stage builds are like having a ship that transforms as needed. Craft your Dockerfile to have multiple stages – one for building and another for running.

This way, you can use the building stage to assemble all your resources, and then seamlessly transition to the runtime stage with only what’s necessary.

# Building stage
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Runtime stage
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD [ "node", "dist/index.js" ]

Navigate with Fewer Layers

Just like sailing, navigating through the Docker landscape requires finesse. Minimize your layers by combining commands wherever possible.

Each layer adds weight, so it’s best to avoid unnecessary stops along the way. Think of it as a seamless voyage – less time docked, more time sailing smoothly.

For example, instead of using multiple RUN instructions in your Dockerfile, combine them into a single one. Each RUN creates a new layer, so merging them reduces the layer count and keeps your image slim.

Before:

FROM node:20
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean

After:

FROM node:20
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
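On Alpine-based images the same single-layer idea applies with apk, and its --no-cache flag skips the package index cleanup step entirely:

```dockerfile
FROM node:20-alpine
# --no-cache avoids storing the package index, so no cleanup command is needed
RUN apk add --no-cache curl
```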

Cache, Cache, Cache Dependencies

Caches are your secret treasures in the Docker realm. They save time and effort by reusing layers from previous voyages.

However, remember that not all treasures are forever. Be mindful of cache invalidation, especially when your dependencies evolve. Sometimes it’s worth taking a brief pause to ensure a smoother journey ahead.

For example, when your package.json hasn’t changed, Docker can reuse the cached layer for npm install.

FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
💡

This copies package.json and package-lock.json. If you are using another package manager, adjust the command accordingly: yarn.lock for yarn, or pnpm-lock.yaml for pnpm.
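As a sketch, the pnpm variant of this cached-dependencies pattern might look like the following (using corepack, which ships with recent Node releases, to enable pnpm):

```dockerfile
FROM node:20
WORKDIR /app
# Enable pnpm via corepack (bundled with Node 20)
RUN corepack enable
# Copy pnpm's lockfile so the install layer stays cached until it changes
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
COPY . .
```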

Hoist the .dockerignore file

Let’s trim the cargo before setting sail. The .dockerignore file is your compass to exclude unnecessary files from being loaded into your containers.

Only pack what’s essential for the voyage, and leave the rest behind. This keeps your images light, nimble, and ready for adventure.

Example .dockerignore file:

node_modules
npm-debug.log
Dockerfile
.dockerignore
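For a typical Node.js project, you might also exclude version control data, local environment files, and build output (adjust to your repository; for example, keep dist/ out of this list if your Dockerfile copies pre-built output into the image):

```
node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
.env
dist
coverage
```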

Compress Your Cargo

To make the most of your cargo hold, compress files before loading them aboard. Compressed archives take up less space and unpack quickly upon arrival.

Consider using gzip or other compression tools to ensure your cargo arrives swiftly and efficiently.

# Compress assets on the host before building:
#   tar -czf assets.tar.gz assets/
FROM node:20
WORKDIR /app
# ADD auto-extracts local tar archives, so the archive
# itself never persists as a file inside the image
ADD assets.tar.gz .
# Install dependencies
COPY package*.json ./
RUN npm install
# Copy remaining files
COPY . .
CMD ["npm", "start"]

Optimize the Dockerfile Order

Last but not least, plan your journey wisely. Arrange your Dockerfile commands in an optimized order.

Start with the steps that change least often, such as installing dependencies, and leave the frequently changing steps, like copying your application source, for last. This way, Docker can reuse cached layers and keep your image builds swift and steady.

FROM node:20
WORKDIR /app
# Install dependencies first (they change least often)
COPY package*.json ./
RUN npm install
# Copy static assets
COPY assets/ assets/
# Copy remaining files last (they change most often)
COPY . .
CMD ["npm", "start"]

Conclusion

And there you have it, intrepid Docker explorer! With these techniques at your disposal, you’re now equipped to sculpt Docker images that are both efficient and effective.

From embracing minimalism with base images to mastering multi-stage builds and navigating through fewer layers, you’ve earned your captain’s hat in the world of Docker image optimization. So hoist your sails and set forth on a voyage of streamlined containers. Bon voyage and happy coding!
