More Docker as a cross-compiler

Now that I work in the Silicon Shed, I decided that I needed to have some more Linux in there ... and restore some balance to the universe. So I thought that a good place to start would be with OpenWrt running on my router. I already had a TP-Link WR740n lying around which I had been playing with before. So I went and got the latest version of OpenWrt (which was Chaos Calmer, 15.05) and installed that. Cool! Now I have a nice little router with plenty of features to play with, and it's an embedded Linux box as well.

But having gotten that far, I decided to set up a new cross-compile environment. I had done this before using Virtual Machines, but this time I wanted to use Docker to make it easier to compile my C programs using whatever machine I happen to be sitting in front of. I started by creating a base Docker image with all the files in place to give me a buildroot. In my Dockerfile I used Debian Jessie as the starting point and then added all the files by cloning them from OpenWrt's git repository. That generic image can be found on DockerHub as davidsblog/openwrt-build-15-05. It's an automated build, with the source files coming from GitHub. I also wanted to do the same with the next stage - actually compiling OpenWrt from sources ... but when I tried, DockerHub timed out the build, since the OpenWrt build process can take a few hours. So the rest could not be done as an automated build.
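
If you just want to poke around inside that base image, pulling it and opening a shell is enough (assuming the image is still available on DockerHub under that name):

docker pull davidsblog/openwrt-build-15-05
docker run -it --rm davidsblog/openwrt-build-15-05 /bin/bash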

I used my base image to create a .config file for the WR740n using the make menuconfig command, then copied that .config file out and referenced it in my next Dockerfile. This new Dockerfile takes the base image, adds the config file and then calls make to actually build my specific cross-compile environment for the WR740n. If somebody wanted to make a cross-compiler for a different device, they would just need to change the config file for their own device and use docker build to create an image.
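
In other words, the device-specific Dockerfile is tiny. A rough sketch would be something like this (the /openwrt path is an assumption about where the buildroot lives inside the base image, and the tag name is made up, so adjust to suit):

cat > Dockerfile <<'EOF'
# hypothetical second-stage Dockerfile: start from the generic base image,
# add the device-specific .config, then run the (long) toolchain build
FROM davidsblog/openwrt-build-15-05
COPY .config /openwrt/.config
RUN cd /openwrt && make
EOF
docker build -t openwrt-build-mydevice .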

So I built the image and pushed it out to DockerHub as davidsblog/openwrt-build-wr740n. As long as you have the bandwidth, it's much easier to be able to pull a pre-configured cross-compiler than to set one up from scratch. And it's really easy to use.
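
Pulling it down is a one-liner (again, assuming the image is still on DockerHub):

docker pull davidsblog/openwrt-build-wr740n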

This is how I'm using it: I created a script called /usr/local/bin/740ncc which contains this:

#!/bin/bash
docker run --rm -v ${PWD}:/src davidsblog/openwrt-build-wr740n:latest \
     /bin/sh -c "cd /src; $*"

So then, on my local machine, I navigate to the folder containing the C sources I want to cross-compile. Now I can type something like 740ncc make and the make command will be routed to a Docker container which will do the cross-compilation for the WR740n. The compiled program ends up on the local machine (not in the Docker container), just as if you had compiled it locally. I think that's very not-bad. I am also using Docker's --rm parameter so that the container is automatically removed afterwards. Here's an example where I'm building my rCPU monitoring webserver for the TP-Link WR740n:
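
It boils down to something like this (the path is illustrative, just wherever the rCPU sources happen to be checked out):

cd ~/code/rCPU
740ncc make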

I also discovered something interesting during all this: using the find command inside the same Docker image, but on different machines, does not always show the results in the same order. This had me puzzled for a while when I was using the find command in one of my scripts. I used the image on Ubuntu and the order of the find results was different from the same image running on my laptop under Elementary OS. In my experience, on the same machine the order of results from find is consistent, and I was expecting the same to hold for a container, but obviously you can't rely on that. Interesting.
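
If a script does care about the order, the safe thing is to impose one explicitly rather than trusting the filesystem, for example:

find . -name '*.c' | sort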

Docker and the MEAN stack

I have been reading the book: Write Modern Web Apps with the MEAN Stack: Mongo, Express, AngularJS, and Node.js and wanted a way to mess about with some examples on the various machines I use (since I'm frequently switching between Mac, Windows and Linux boxes). So I decided to try and Dockerify the process a bit.

I wanted to build a Docker container that I could use on whatever platform I happened to be sitting in front of. I also wanted to be able to run Node and Mongo in a single container. I realise that outside the development environment you'd likely want to split them, but for some simple development I didn't want to have to start up multiple containers.

Of course, when the container runs, I would want to check that it is working, so I also set about including a very simple MEAN stack application as a kind of "Hello, World!" example. But by using Docker Volumes it is easy to replace this default app with something on your local hard disk or perhaps with a Docker Data Volume Container which only needs to contain your app. So the simple default app just needs to prove that Node and Mongo are running.
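
As a rough sketch of that idea, the data volume container pattern would look something like this (the my-app-code and my-app-image names here are made up):

# the app lives at /vol/node/start inside the container (names below are made up)
docker create -v /vol/node/start --name my-app-code my-app-image
docker run --rm --volumes-from my-app-code -p 8080:8888 davidsblog/node-mongo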

So my first question was how to go about running both Node.js and MongoDB in a single container, and the solution I have gone with is documented here. It adds a docker-friendly init system to the Ubuntu image. This means that when the container is shut down the init process will try to shut down other processes gracefully, rather than leaving them to be forcefully killed by the kernel. The init process will also reap any zombie processes which may accumulate.
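
I won't repeat the whole setup here, but to give the flavour: with that kind of init system each daemon gets a small, supervised run script, along these lines (the service path and mongod flags are illustrative, not necessarily what my image uses):

#!/bin/sh
# e.g. /etc/service/mongod/run - a runit-style service script which the init
# process starts and supervises (the path and dbpath are illustrative)
exec mongod --dbpath /data/db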

With a suitable base image in place, I could then build my own image with specific versions of Node.js and MongoDB. Pinning the versions means that I will have a stable set of binaries which will give repeatable behaviour during my development and testing. I don't want to waste time with version conflicts.

Anyway, the result can be found here: https://github.com/davidsblog/node-mongo/blob/master/Dockerfile. I am still tinkering with it, but if you look at the Dockerfile you'll probably get the idea. Both the location of the Node app and the data directory for MongoDB are set up as volumes, so you can override them at runtime. The container then just becomes an engine based on a stable set of binaries, and you provide it with your own code and data. You can test it like this:

docker run --rm -p 8080:8888 davidsblog/node-mongo

...and then point your browser at port 8080 on the Docker host (which maps to port 8888 inside the container). You should see a simple app which allows you to post comments, where the comments are saved to MongoDB. But to swap out the default app for some of your own code you can do this (assuming you are in the directory where your server.js is located):

docker run --rm -v ${PWD}:/vol/node/start -p 8080:8888 davidsblog/node-mongo

...and you should now be running your own program (as long as it is called server.js and listens on the same port, 8888, inside the container).

In the container, the Node.js code is run via Nodemon, which means that when you make changes, Nodemon will restart the app for you - no manual restarting.

In the meantime, I can now mess about developing applications using the MEAN stack without installing everything on my local machine. Cool!

Docker plus Emscripten

I have started to experiment with Docker. I thought that I'd try it out because I wanted to use Emscripten to compile some C programs to JavaScript (in the asm.js style). Since getting Emscripten set up can take some time, I thought that a pre-configured Docker container might be a useful shortcut. I also wanted to try out Docker on my MacBook at some stage, so that I can do Linuxy things without having to fire up a full Virtual Machine.

But for now, I am doing this on Ubuntu Server 15.04. Setting up Docker was pretty painless, like this:

sudo apt-get install docker.io

...after which I did a reboot to make sure the docker daemon started automatically at boot-time. I then verified everything was OK by doing this:

sudo docker run hello-world

To make things easier, you can add your user to the docker group so that docker commands do not need to be run with sudo. I did that like this:

sudo usermod -aG docker [my_username_here]

After logging out and back in, Docker does not need to be run with sudo. Cool... now to try Emscripten inside Docker. I found an image called apiaryio/base-emscripten-dev which looked like it should do the hard work for me. I tried it out like this:

docker run apiaryio/base-emscripten-dev emcc -v

Which will test things out. The first time takes a while and downloads hundreds of megabytes as it pulls down the image. But eventually a message comes back from the Emscripten compiler. I thought that it would be really cool to be able to call the Emscripten compiler, emcc, inside Docker from the host machine. So I came up with this script:

#!/bin/bash
docker run -v ${PWD}:/src apiaryio/base-emscripten-dev \
/bin/sh -c "cd /src; emcc $*"

...which I saved as /usr/local/bin/demcc and marked as executable (sudo chmod +x /usr/local/bin/demcc). This script does a few things:

  1. mounts the current directory on the host in the Docker container (as /src)
  2. in the docker container, changes to the correct directory
  3. passes any parameters on to emcc in the container

So now I can just do this:

demcc hello.c -o hello.html

Which will compile the local C file into JavaScript, with all the hard work done inside a Docker container. So now Emscripten is working, and I didn't have to set it up. Happy days! I guess this same approach would work with other types of compilation, like cross-compiling for other platforms. I need to try that.
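
One last note: depending on the program, browsers can be fussy about loading the generated JavaScript from a file:// URL, so it's worth serving the output directory with any quick static web server, for example (assuming Python 2 is installed):

demcc hello.c -o hello.html
python -m SimpleHTTPServer 8000
# then browse to http://localhost:8000/hello.html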