More Docker as a cross-compiler

Now that I work in the Silicon Shed, I decided that I needed to have some more Linux in there ... and restore some balance to the universe. So I thought that a good place to start would be with OpenWrt running on my router. I already had a TP-Link WR740n lying around which I had been playing with before. So I went and got the latest version of OpenWrt (which was Chaos Calmer, 15.05) and installed that. Cool! Now I have a nice little router with plenty of features to play with, and it's an embedded Linux box as well.

But having gotten that far, I decided to set up a new cross-compile environment. I had done this before using virtual machines, but this time I wanted to use Docker to make it easier to compile my C programs from whatever machine I happen to be sitting in front of. I started by creating a base Docker image with all the files in place to give me a buildroot. In my Dockerfile I used Debian Jessie as the starting point and then added the files by cloning them from OpenWrt's git repository. That generic image can be found on DockerHub as davidsblog/openwrt-build-15-05; it's an automated build with the source files coming from GitHub. I wanted to do the same with the next stage (actually compiling OpenWrt from source), but when I tried, DockerHub timed out the build. The OpenWrt build process can take a few hours, so the rest could not be done as an automated build.
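
To give an idea of what that involves, here is a minimal sketch of the kind of Dockerfile I mean (the package list and clone URL are from memory and only illustrative, the real image may differ):

FROM debian:jessie

# tools the OpenWrt buildroot needs
RUN apt-get update && apt-get install -y \
    build-essential git subversion gawk gettext unzip wget \
    libncurses5-dev libssl-dev zlib1g-dev python file && \
    apt-get clean

# OpenWrt refuses to build as root, so use an unprivileged user
RUN useradd -m builder
USER builder

# pull the Chaos Calmer (15.05) sources (URL is illustrative)
RUN git clone git://git.openwrt.org/15.05/openwrt.git /home/builder/openwrt
WORKDIR /home/builder/openwrt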

I used my base image to create a .config file for the WR740n using the make menuconfig command, then copied that .config file out of the container and referenced it in my next Dockerfile. This new Dockerfile takes the base image, adds the config file and then calls make to actually build my specific cross-compile environment for the WR740n. If somebody wanted to make a cross-compiler for a different device, they would just need to change the .config file for their own device and use docker build to create an image.
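
That second Dockerfile ends up being very short; roughly something like this (again just a sketch, the exact paths and make invocation are assumptions on top of the base image above):

FROM davidsblog/openwrt-build-15-05

# the .config produced by 'make menuconfig', targeting the WR740n
COPY .config /tmp/wr740n.config

# expand the config to full defaults, then build the toolchain
# and firmware - this is the step that takes hours
RUN cp /tmp/wr740n.config .config && make defconfig && make

Building it is then just a case of running docker build -t some-tag . in a folder containing that Dockerfile and the .config file.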

So I built the image and pushed it out to DockerHub as davidsblog/openwrt-build-wr740n. As long as you have the bandwidth, it's much easier to be able to pull a pre-configured cross-compiler than to set one up from scratch. And it's really easy to use.
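
On a new machine there is nothing to build locally, it's just a case of:

docker pull davidsblog/openwrt-build-wr740n

...and the whole toolchain comes down ready to use (docker run will also pull it automatically the first time).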

This is how I'm using it: I created a script called /usr/local/bin/740ncc which contains this:

#!/bin/bash
docker run --rm -v ${PWD}:/src davidsblog/openwrt-build-wr740n:latest \
     /bin/sh -c "cd /src; $*"
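
You just need to make it executable with chmod +x /usr/local/bin/740ncc and it can be called from any directory.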

So then, on my local machine, I navigate to the folder containing the C sources I want to cross-compile. Now I can type something like 740ncc make and the make command will be routed to a Docker container which does the cross-compilation for the WR740n. The compiled program ends up on my local machine (not inside the Docker container), just as if I had compiled it locally. I think that's very not-bad. I'm also using Docker's --rm parameter so that the container is automatically removed afterwards. Here's an example where I'm building my rCPU monitoring webserver for the TP-Link WR740n:
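
In essence it's no more than this (the path is just an example for wherever the rCPU sources are checked out):

cd ~/code/rCPU
740ncc make

...and the cross-compiled binaries are left in that folder when the container exits.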

I also discovered something interesting during all this: using the find command inside the same Docker image, but on different machines, does not always return results in the same order. This had me puzzled for a while when I was using find in one of my scripts. I ran the image on Ubuntu and the order of the find results was different from the same image running on my laptop under elementary OS. In my experience, on a single machine the order of results from find is consistent, and I was expecting the same to hold for a container, but obviously you can't rely on that. Interesting.
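
So if a script relies on the order, the fix is to impose one explicitly, for example:

find . -name '*.c' | sort

...rather than trusting whatever order the filesystem happens to hand back.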