Make an Infinite Sleep Program in Only 4KB

In my network configuration, some of my Docker containers, for example DNS, need to achieve high availability with Anycast. In my previous post, I created a Busybox container running tail -f /dev/null so that it persists indefinitely without using any CPU cycles, purely to hold a network namespace shared by both the server application and BIRD. In short: I reinvented the Kubernetes Pod on my own. I don't use K8S, since my nodes run individually rather than as a cluster, so I don't need K8S's cluster functionality at all; besides, K8S is difficult to set up. On second thought, though, a Busybox container seems like overkill for this purpose, and I have to set the entrypoint manually. It would be great if I had a tiny Docker image that does nothing but sleep indefinitely. Plan A:...
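
For reference, a minimal sketch of the Busybox-plus-tail setup described above; container and image names are placeholders:

    # Start a tiny container whose only job is to own the network namespace.
    docker run -d --name dns-netns busybox tail -f /dev/null

    # Attach the real workloads to that namespace so the DNS server and BIRD
    # share the same interfaces and Anycast address.
    docker run -d --name dns-server --network container:dns-netns some-dns-image
    docker run -d --name dns-bird   --network container:dns-netns some-bird-image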

Static Build Tiny Docker Images

Docker images can be seen as numerous tiny Linux systems. Most of them are based on Debian, Ubuntu, or Alpine, with extra software installed on top. Using a complete Linux distribution as the base has the benefit of keeping commonly used commands, such as ls and cat, available; they are often needed during the image-building process. In addition, these distributions have comprehensive package repositories, allowing users to create images that "just work" with apt-get. However, as soon as the image is built, these utilities become an unnecessary burden on disk space. A full Linux system also contains a service-management daemon, such as systemd or OpenRC, which is useless for Docker containers that run only one program at a time. Although Docker images are "overlaid",...
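
One common way to apply this idea is a multi-stage build that compiles a statically linked binary and copies only that binary into an empty scratch base. A hedged sketch, assuming a Go program (the languages and tools used in the post itself may differ):

    cat > Dockerfile <<'EOF'
    FROM golang:alpine AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    FROM scratch
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]
    EOF
    docker build -t tiny-static .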

x32 ABI and Docker Containers

History of x86 & x86_64, and the x32 ABI: Most of the personal computers and servers we use nowadays are based on the x86_64 architecture, whose specification was released by AMD in 2000, with the first processor following in 2003. Since x86_64 is a 64-bit architecture, each CPU register can hold 64 bits (8 bytes) of data. Before x86_64 became popular, most computers used Intel processors and the corresponding x86 architecture/ISA, a 32-bit architecture whose registers hold 32 bits (4 bytes) of data. One significant improvement of the 64-bit architecture is its greater memory-addressing capability. Computers usually follow this routine when accessing memory: write the memory address to be accessed into a register,...
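
A hedged illustration of the pointer widths discussed above: the same C program built for plain x86_64 and for the x32 ABI (32-bit pointers on 64-bit registers). This assumes a multilib gcc with x32 support; running the x32 binary additionally requires kernel x32 support:

    cat > ptr.c <<'EOF'
    #include <stdio.h>
    int main(void) { printf("pointer size: %zu bytes\n", sizeof(void *)); return 0; }
    EOF
    gcc -m64  ptr.c -o ptr64  && ./ptr64    # prints 8 on x86_64
    gcc -mx32 ptr.c -o ptrx32 && ./ptrx32   # prints 4 under the x32 ABI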

Sharing Network Namespace Among Docker Containers for Bird Anycasting

Exactly one year ago, I set up an Anycast service with Docker in the DN42 network. Back then, I customized the container's image, added a Bird installation to it, and put in a config file to announce Anycast routes via OSPF. However, as time went by, a few problems surfaced: The process of installing Bird takes time. I can't simply install Bird with apt-get, since my Dockerfiles need to support multiple architectures, and Bird isn't available in Debian's repos for some of them. And since my build server is AMD64 and runs images of other architectures through qemu-user-static, a lot of instruction translation is needed during image building and software compilation, which is extremely inefficient....
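
One way to share a network namespace between containers, sketched here with docker-compose and placeholder image names, so that Bird runs in its own container instead of being baked into (and recompiled for) every service image; the post's actual setup may differ:

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      anycast-dns:
        image: some-dns-image     # placeholder for the anycasted service
      bird:
        image: some-bird-image    # placeholder for a prebuilt Bird image
        network_mode: "service:anycast-dns"   # reuse anycast-dns's network namespace
    EOF
    docker compose up -d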

Using GPP to Preprocess Dockerfile for #include, #if, and Other Features

Since I have multiple devices with different architectures running Docker (including x86_64 computers and servers, an ARM32v7 Tinker Board, and an ARM64v8 Raspberry Pi 3B), each of my Docker images needs to be built in multiple variants. Initially, I wrote a separate Dockerfile for each architecture, but this approach proved difficult to manage uniformly, often leading to missed updates when modifying Dockerfiles during software upgrades. Later, I adopted Docker's build argument feature, using the --build-arg parameter to select different base images and download architecture-specific files based on arguments. However, this approach still has significant limitations. First, different projects use varying naming conventions for architectures. For example, the x86 32-bit architecture (i386)...
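
A hedged sketch of the GPP idea, using gpp's #ifdef/#else directives and a command-line macro definition; the macro name, file names, and exact gpp flags are illustrative and may differ from the ones the post settles on:

    cat > Dockerfile.gpp <<'EOF'
    #ifdef AMD64
    FROM debian:buster
    #else
    FROM arm32v7/debian:buster
    #endif
    RUN apt-get update && apt-get install -y curl
    EOF

    gpp -DAMD64=1 -o Dockerfile Dockerfile.gpp    # pick the amd64 branch
    docker build -t myimage:amd64 .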

Using Docker Build Args to Share a Single Dockerfile Across Multiple Architectures

Since I have devices of multiple architectures running Docker (including x86 servers, a Raspberry Pi, and a Tinker Board), for each piece of commonly used software I need to build an image for every architecture. Previously, my approach was to maintain a separate Dockerfile for each architecture, similar to this: You can see that each Dockerfile is almost identical except for the base image referenced in the FROM instruction. While this management method simplifies writing build scripts (travis.yml) by allowing a direct docker build command for each one, the drawback is obvious: every time the software version updates or I decide to add or remove a feature, I have to modify multiple Dockerfiles. Two days ago while researching, I discovered a Docker feature: Build Args,...
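
A hedged sketch of the Build Args approach: one Dockerfile whose base image is chosen at build time with --build-arg (image names and tags are placeholders):

    cat > Dockerfile <<'EOF'
    ARG BASE_IMAGE=debian:buster
    FROM ${BASE_IMAGE}
    RUN apt-get update && apt-get install -y nginx
    EOF

    docker build --build-arg BASE_IMAGE=debian:buster         -t myapp:amd64   .
    docker build --build-arg BASE_IMAGE=arm32v7/debian:buster -t myapp:arm32v7 .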

Building ARM Docker Images on x86, Automated Builds with Docker Hub and Travis

Typically, Docker images are created by running specified commands step by step within an existing image. This poses no issues for most users on x86 computers, as the architectures are compatible: images built on one machine can usually run directly on others, unless the programs inside use newer instruction sets like AVX. However, there are also ARM-based hosts that can run Docker and execute specially compiled ARM images. These include the Raspberry Pi series and similar boards like the Cubieboard, Orange Pi, and Asus Tinker Board; in addition, hosting providers like Scaleway offer ARM-based dedicated servers. Since ARM binaries cannot run natively on x86 computers, you can't directly build ARM images from a Dockerfile on an x86 machine....
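
One common way around this, sketched below with placeholder image names, is to register qemu-user-static interpreters through binfmt_misc so the x86 host can transparently execute ARM binaries during the build; the exact registration method used in the post may differ. Older setups instead copy the qemu-arm-static binary into the image itself.

    # Register qemu interpreters for foreign architectures (one common method).
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

    # Build from an ARM base image; RUN steps execute through qemu emulation.
    cat > Dockerfile <<'EOF'
    FROM arm32v7/debian:buster
    RUN apt-get update && apt-get install -y curl
    EOF
    docker build -t myapp:arm32v7 .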

Optimizing Docker Image Size

Since switching from OpenVZ-based VPSes to KVM-based ones, I've been using Docker to deploy essential services like nginx, MariaDB, and PHP for my websites. This approach not only simplifies restarting and managing the configuration of individual services (by mapping all configuration directories in with volumes) but also streamlines service upgrades. For example, my blog's VPS has limited resources, with memory usage consistently around 80% recently. When updating nginx or adding modules, compiling directly on this VPS would be slow and risk crashing the site due to insufficient memory. With Docker, I can build images on other resource-rich VPS machines or my local computer, push them to Docker Hub, then pull and run them on the production VPS. However,...
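
A hedged sketch of that workflow, with placeholder repository and path names:

    # On the resource-rich build machine:
    docker build -t myuser/nginx:latest .
    docker push myuser/nginx:latest

    # On the small production VPS:
    docker pull myuser/nginx:latest
    docker run -d --name nginx \
      -v /data/nginx/conf:/etc/nginx \
      -p 80:80 -p 443:443 \
      myuser/nginx:latest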

nginx: TLS 1.3 Multi-Draft Support and HPACK

It has been 11 months since I last enabled TLS 1.3 for nginx. After nearly a year, many nginx-related programs and patches have undergone significant changes: OpenSSL has released beta versions of 1.1.1, the latest being 1.1.1-pre8 (Beta 6) at the time of writing. nginx has been updated to version 1.15.1. Bugs in nginx's HPACK patch (HTTP header compression) have been fixed by follow-up patches; the original HPACK patch caused abnormal website access, manifesting as protocol errors when loading any page after the first. A developer has released an OpenSSL patch that enables the latest OpenSSL to simultaneously support TLS 1.3 draft versions 23, 26, and 28. Let's Encrypt certificates now include Certificate Transparency information by default,...
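
For reference, two hedged checks that are useful after rebuilding nginx against OpenSSL 1.1.1 (the hostname is a placeholder; the -tls1_3 option requires an OpenSSL 1.1.1 client):

    # Confirm which OpenSSL version nginx was built against.
    nginx -V 2>&1 | grep -i openssl

    # Try to negotiate TLS 1.3 against the server.
    openssl s_client -connect example.com:443 -servername example.com -tls1_3 < /dev/null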

Establishing Dual-Stack Intercommunication Network Between Multiple Docker Servers Using ZeroTier One

Achieving intercommunication between containers on multiple Docker servers is a challenging problem. If you build your own overlay network, you need to set up a service like etcd on one server; but if the server hosting etcd crashes, the entire network goes down. The cheap VPSes I use occasionally experience network interruptions, and I often accidentally crash servers myself, so this approach isn't feasible for me. Docker also has commercial overlay networking solutions like Weave, but for an individual user these are too expensive (I'm just experimenting for fun), so they're not considered either. In these network architectures, a central server like etcd or Weave records which server each container is on and its internal IP, allowing DNS resolution to any container....
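
As a starting point, a hedged sketch of getting each Docker host onto the same ZeroTier network (the network ID is a placeholder; how containers are then attached to that network is the subject of the rest of the post):

    # Install ZeroTier One with the official installer.
    curl -s https://install.zerotier.com | sudo bash

    # Join the shared network, then authorize this host in ZeroTier Central.
    sudo zerotier-cli join 0123456789abcdef

    # Verify the assigned IPv4/IPv6 addresses once authorized.
    sudo zerotier-cli listnetworks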