Build Your Own Docker with Linux Namespaces, cgroups, and chroot: Hands-on Guide


Introduction

Containerization has transformed the world of software development and deployment. Docker ↗️, a leading containerization platform, leverages Linux namespaces, cgroups, and chroot to provide robust isolation, resource management, and security.

In this guide, we’ll skip the theory (follow the links above if you want to learn more about these topics) and jump straight into the practical implementation.


Before we dive into building our own Docker-like environment using namespaces, cgroups, and chroot, it’s important to clarify that this hands-on guide is not intended to replace Docker or its functionality.

Docker has features such as layered images, networking, container orchestration, and extensive tooling that make it a powerful and versatile platform for deploying applications.

The purpose of this guide is to offer an educational exploration of the foundational technologies that form the core of Docker. By building a basic container environment from scratch, we aim to gain a deeper understanding of how these underlying technologies work together to enable containerization.

Let’s build Docker

Step 1: Setting Up the Namespace

To create an isolated environment, we start by setting up a new namespace.

We use the unshare command with the --uts, --pid, --net, --mount, and --ipc flags, which give our container its own hostname, process IDs, network stack, mount table, and IPC objects. The --fork flag tells unshare to fork the new shell as a child process, which is required for the new PID namespace to take effect.

Terminal window
unshare --uts --pid --net --mount --ipc --fork

Read more about the unshare command on its man page ↗️
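Without an explicit command, unshare starts your default shell inside the new namespaces. One quick way to see the isolation at work is to change the hostname from that shell: the UTS namespace keeps the change private to the container (the name container1 below is just an illustrative value).

Terminal window
unshare --uts --pid --net --mount --ipc --fork /bin/bash
hostname container1   # changes the hostname only inside the new UTS namespace
hostname              # prints "container1" here, while the host keeps its original name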

Step 2: Configuring cgroups

Control groups (cgroups) let us limit and account for the system resources (CPU, memory, I/O, and so on) used by our containerized processes.

We will create a new cgroup for our container and set a CPU quota to restrict its resource usage. The paths below assume the cgroup v1 CPU controller is mounted at /sys/fs/cgroup/cpu.

Terminal window
mkdir /sys/fs/cgroup/cpu/container1
echo 100000 > /sys/fs/cgroup/cpu/container1/cpu.cfs_quota_us
echo $$ > /sys/fs/cgroup/cpu/container1/tasks

On the first line we create a new directory named container1 within /sys/fs/cgroup/cpu/. Creating the directory is all it takes to create a new cgroup; the kernel automatically populates it with the CPU controller’s interface files.

On the second line we write the value 100000 to the cpu.cfs_quota_us file. This sets the CPU quota for the cgroup: its processes may run for at most 100000 microseconds of CPU time per scheduling period. Since the default period (cpu.cfs_period_us) is also 100000 microseconds, this quota works out to one full CPU core.

Finally, we write $$ to the tasks file, which controls which processes belong to the cgroup. $$ is a special shell variable that holds the process ID (PID) of the current shell, so this line assigns our shell to the container1 cgroup.

Any child processes spawned by the shell inherit its cgroup membership, so everything we run inside the container will be subject to the CPU quota we just set.
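The layout above is the older cgroup v1 interface. On distributions that have switched to the unified cgroup v2 hierarchy, the file names differ; a rough equivalent, assuming cgroup v2 is mounted at /sys/fs/cgroup, looks like this:

Terminal window
echo "+cpu" > /sys/fs/cgroup/cgroup.subtree_control        # enable the CPU controller for child cgroups, if it is not already
mkdir /sys/fs/cgroup/container1
echo "100000 100000" > /sys/fs/cgroup/container1/cpu.max   # quota and period, both in microseconds
echo $$ > /sys/fs/cgroup/container1/cgroup.procs           # move the current shell into the cgroup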

Step 3: Building the Root File System

To create the file system for our container, we use debootstrap to set up a minimal Ubuntu environment within a directory named "ubuntu-rootfs".

This serves as the root file system for our container.

Terminal window
debootstrap focal ./ubuntu-rootfs http://archive.ubuntu.com/ubuntu/

The first argument focal specifies the Ubuntu release to install. In this case, we are installing Ubuntu 20.04 (Focal Fossa) ↗️.

The second argument ./ubuntu-rootfs specifies the directory to install the Ubuntu environment into. In this case, we are installing it into the ubuntu-rootfs directory.

The third argument http://archive.ubuntu.com/ubuntu/ specifies the URL of the Ubuntu repository to use for the installation.

You can read more about debootstrap on its man page ↗️
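debootstrap can take a few minutes while it downloads and unpacks the base packages. When it finishes, ./ubuntu-rootfs contains a small but complete Ubuntu directory tree, which you can inspect before moving on:

Terminal window
ls ./ubuntu-rootfs
# bin  boot  dev  etc  home  lib  ...  usr  var -- the familiar top-level layout of a root file system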

Step 4: Mounting and Chrooting into the Container

We mount essential file systems, such as /proc, /sys, and /dev, within our container’s root file system.

Then, we use the chroot command to change the root directory to our container’s file system.

Terminal window
mount -t proc none ./ubuntu-rootfs/proc
mount -t sysfs none ./ubuntu-rootfs/sys
mount -o bind /dev ./ubuntu-rootfs/dev
chroot ./ubuntu-rootfs /bin/bash

The first command mounts the proc filesystem into the ./ubuntu-rootfs/proc directory. The proc filesystem provides information about processes and system resources in a virtual file format.

Mounting the proc filesystem in the specified directory allows processes within the ./ubuntu-rootfs/ environment to access and interact with the system’s process-related information.

The next command mounts the sysfs filesystem into the ./ubuntu-rootfs/sys directory. The sysfs filesystem provides information about devices, drivers, and other kernel-related information in a hierarchical format.

Mounting the sysfs filesystem in the specified directory enables processes within the ./ubuntu-rootfs/ environment to access and interact with system-related information exposed through the sysfs interface.

Finally, we bind-mount the host’s /dev directory onto the ./ubuntu-rootfs/dev directory. The /dev directory contains device files that represent physical and virtual devices on the system.

By binding the /dev directory to the ./ubuntu-rootfs/dev directory, any device files accessed within the ./ubuntu-rootfs/ environment will be redirected to the corresponding devices on the host system.

This ensures that the processes running within the ./ubuntu-rootfs/ environment can interact with the necessary devices as if they were directly accessing them on the host system.

Once we have mounted the necessary file systems, we use the chroot command to change the root directory to the ./ubuntu-rootfs/ directory. Think of this as doing docker exec into the container.
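When you are done experimenting, exit the chroot shell and unmount the file systems you mounted above. Run the teardown from the unshared shell, in the same directory as before:

Terminal window
exit                          # leave the chroot and return to the unshared shell
umount ./ubuntu-rootfs/dev
umount ./ubuntu-rootfs/sys
umount ./ubuntu-rootfs/proc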

Step 5: Running Applications within the Container

Now that our container environment is set up, we can install and run applications inside it.

In this example, we install the Nginx web server to see how an application behaves within the container. One caveat: the --net flag from Step 1 gives the container an empty network namespace (only a down loopback interface), so apt cannot reach the Ubuntu mirrors from inside it. To follow along, either drop --net from the unshare command or set up networking for the namespace (for example, a veth pair with NAT on the host) before running the commands below.

Terminal window
(container) $ apt update
(container) $ apt install nginx
(container) $ service nginx start
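If networking is available inside the container, you can confirm that Nginx is serving its default page over the loopback interface. curl is not part of the minimal debootstrap install, so we install it first (this check is only illustrative):

Terminal window
(container) $ apt install -y curl
(container) $ curl -s http://localhost | head -n 5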

Conclusion

In this guide, we built a basic Docker-like environment using Linux namespaces, cgroups, and chroot. We explored the code and command examples to gain a deeper understanding of how these technologies work together to create isolated and efficient containers.
