Cross-compiling for OpenWrt with various programming languages

I have been messing about with the Nim programming language quite a lot recently. Because it compiles to C, it's very well suited to programming all sorts of devices, since almost everything has a C compiler. I have to say that I am drawn to Nim's main goal of being efficient, because I dislike bloat. However, Nim has some disadvantages, and for me the biggest is that it has not yet reached a v1 release. So the language is not as stable as it could be, and I get nervous each time a new version of the compiler is released, wondering if my code will still work.

In fairness, Nim is quite stable; I don't want to be misleading. But until it reaches a version 1 release I will continue to be cautious. I would not use Nim for important work (yet), although I am using it for hobbyist-type stuff. And I've been enjoying it too.

But in recent weeks I thought it would only be fair to try out some other languages. So I've been doing some experiments on my Linux box, mostly inside Docker containers. Cross-compiling for embedded Linux is high on my personal list of requirements, so I have been trying to target the latest stable release of OpenWrt (15.05.1) running on the Ralink RT5350F chipset. This can be done very easily with Nim, so I tried the same with a few other languages to see how I got on. Thank goodness for Docker: it has made this process so much easier, and when I'm finished it's very easy to tidy up my mess!

1. Go

At first, I thought that it was going to be easy with Go. It looked as simple as this command:

~# GOOS=linux GOARCH=mipsle /usr/local/go/bin/go build -v hello.go

Which looked fine at first glance:

~# file hello
hello: ELF 32-bit LSB executable, MIPS, MIPS32 version 1 (SYSV), 
statically linked, not stripped

But when I ran it on my device, I just got this 'Illegal instruction' message:

~# /tmp/hello 
Illegal instruction
~# 

I then found several issues on the Go GitHub repo about MIPS32 support. So it looks like the easy way is not so easy after all. Anyway, I did eventually get a 'hello world' program cross-compiled for OpenWrt with Go, and this time I was able to run the executable on my target device. In the end I followed the first set of these instructions. However, the resulting binary was 1.8 MB just for the hello world program. At that point I decided to give up, and by then I had read posts from other people saying that cross-compiling Go code for embedded devices resulted in binaries that were big and slow. Maybe in the future things will get better. I hope so.
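For what it's worth, if I come back to Go, the usual trick for trimming the binary (as I understand it) is to strip the symbol table and debug information at link time. A minimal sketch, reusing the hello.go example from above:

~# GOOS=linux GOARCH=mipsle /usr/local/go/bin/go build -ldflags="-s -w" -v hello.go

The -s flag drops the symbol table and -w drops the DWARF debug info; I would not expect miracles, but it should shave a fair chunk off that 1.8 MB.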

2. Rust

It appears that cross-compiling Rust for OpenWrt would be easier if I were using the trunk version of OpenWrt, because support for the mipsel-unknown-linux-musl target (used by OpenWrt trunk) is much better than for mipsel-unknown-linux-uclibc, which I need for the stable release I'm running. So I expect that this will become easier in time. In any case, this seems to work:

~# rustup target add mipsel-unknown-linux-musl
info: downloading component 'rust-std' for 'mipsel-unknown-linux-musl'
 15.3 MiB /  15.3 MiB (100 %)   1.5 MiB/s ETA:   0 s                
info: installing component 'rust-std' for 'mipsel-unknown-linux-musl'

Whereas this version (with uClibc) doesn't:

~# rustup target add mipsel-unknown-linux-uclibc
error: toolchain 'stable-x86_64-unknown-linux-gnu' does not contain 
component 'rust-std' for target 'mipsel-unknown-linux-uclibc'

So maybe I should just try that again when I've moved on to a later version of OpenWrt (or one of the LEDE releases, perhaps).
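For the record, once the musl target is installed, the remaining setup should (in theory) just be a matter of telling Cargo which cross-linker to use and building for that target. I haven't taken this all the way to the device, and the linker name below is illustrative, so treat it as a sketch:

~# cat >> ~/.cargo/config <<'EOF'
[target.mipsel-unknown-linux-musl]
# illustrative: point this at your actual OpenWrt cross-gcc
linker = "mipsel-openwrt-linux-musl-gcc"
EOF
~# cargo build --release --target mipsel-unknown-linux-musl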

3. Vala

I found the Vala language by accident, but it is apparently used a lot by the Elementary OS project, which gives it some kudos in my book. Additionally, the syntax is very much like C#, which means it should be somewhat familiar to me. Like Nim, it also compiles to C, so I thought that support for embedded Linux should be good. However, it depends on some libraries (like GLib2) which I don't have available on my device. I could probably make this work if I wanted, but I had little interest in doing that, so I didn't proceed any further.

4. Crystal

I had a very brief look at Crystal, which does have some support for cross-compiling. It looked interesting, although I found some of the syntax to be a bit strange in places. However, I don't think it supports the target device I need. But I did get this far, purely out of interest:

~# crystal build hello.cr --cross-compile --target "x86_64-unknown-linux-gnu"
cc hello.o -o hello  -rdynamic  -lpcre -lgc -lpthread /opt/crystal/src/ext/libcrystal.a -levent -lrt -ldl -L/usr/lib -L/usr/local/lib
~#

I didn't go any further than that, but it was interesting to have a little play.

In summary...

It actually looks like Nim is a pretty sensible choice for what I'm doing. Compared to the other languages, it's amazingly simple to set up for cross-compilation. So maybe I just need to put up with any minor instabilities in syntax that are likely to appear. But it is a good reminder that I need to try Nim on OpenWrt trunk (or one of the LEDE releases), because they've moved from uClibc to musl. I don't know if that will introduce any issues. I need to come back to that...

At least in the short term I am happy to keep on messing about with Nim. And I'm happy that I have at least tried some alternatives. I will try to keep my eye on Go and Rust, both of which showed signs of promise. When I have a device running OpenWrt trunk, or LEDE, then perhaps I will give Rust another try.

A little more about Nim

I am still tinkering with the Nim programming language, and quite enjoying it. I find the language to be well thought out, and it's very easy to build little libraries which have unit tests included in them. I like that idea.

But I have noticed that the binaries can be quite a lot bigger than my equivalent C programs, which I suppose is to be expected. So I have also been experimenting with the UPX packer which helps to keep the file size down ... this could be useful if you want to run your Nim programs on something tiny, like a router.
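To give an idea of what I mean, this is roughly the recipe I've been using (the flags are from the Nim and UPX documentation; exact savings will obviously vary from program to program):

~# nim c -d:release --opt:size hello.nim
~# strip hello
~# upx --best hello

Compiling with --opt:size favours small code over fast code, strip removes the symbol table, and then UPX compresses whatever is left.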

In the end, I did buy a copy of the Nim in Action book, available through Manning's early access program. I'd recommend this book to people who are learning Nim; I found it easy to follow and enjoyable to read.

Nim in Action book

Because Nim has not reached v1 yet, for the moment I'm only using it for experimentation and hobbyist type stuff. But when it actually hits v1, I think that I'd consider using it for real work too.

First steps into the world of Nim

I was recently doing a bit of reading up on the Rust programming language, but a stray comment somewhere about the Nim programming language sent me off on a bit of a tangent. The thing that really got me interested in Nim was that it compiles to C, and I noticed that this brings plenty of options for portability. One of the reasons why I like programming in C is that you can run the code on all kinds of machines, from tiny embedded devices to supercomputers. But that does come at a cost, because it takes longer and you often have lots more typing to do because of all that boilerplate stuff.

But it looks like Nim would allow you to be quite productive, whilst the resulting executables should still be efficient and fast. To get some ideas, I went off to Rosetta Code and found the Nim code for a simple webserver which looks like this:

import asynchttpserver, asyncdispatch
 
proc cb(req: Request) {.async.} =
  await req.respond(Http200, "Hello, World!")
 
asyncCheck newAsyncHttpServer().serve(Port(8080), cb)
runForever()
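To try that locally first, compiling and running is a one-liner, after which you can point a browser at http://localhost:8080:

nim c -r webserver.nim

The -r flag just tells the Nim compiler to run the resulting binary straight after compilation.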

Being able to create a webserver with just a few lines of code, and with the potential for the code to be portable and run on all kinds of devices seemed very tempting! By this point, I'd forgotten about Rust and wanted to explore Nim a bit further.

So whilst Nim has not reached v1.0 yet, it certainly looked very compelling. The compiler documentation made it look like cross-compilation was not that difficult, so I decided to try and cross-compile some code for a router running OpenWrt. Before going off and spending a lot of time learning a new language, I wanted to see that I *really* could write portable code. I was very happy to see that in just a few minutes I had the example webserver code running on my TP-Link WR740N, as shown here:

To my amazement and joy, this really wasn't that hard. After installing Nim, I had to edit Nim/config/nim.cfg to point to my cross-compile toolchain, so I ended up with these lines:

mips.linux.gcc.exe = "/home/openwrt/openwrt/staging_dir/toolchain-mips_34kc_gcc-4.8-linaro_uClibc-0.9.33.2/bin/mips-openwrt-linux-gcc"
mips.linux.gcc.linkerexe = "/home/openwrt/openwrt/staging_dir/toolchain-mips_34kc_gcc-4.8-linaro_uClibc-0.9.33.2/bin/mips-openwrt-linux-gcc"

Then I simply needed to pass the correct parameters to the Nim compiler, like this:

nim c --cpu:mips --os:linux webserver.nim

Which cross-compiled the example webserver ready for it to run on my TP-Link WR740N. That actually seems pretty awesome. I think that I need to learn more; perhaps I'll even go and buy the Nim in Action book. All this stuff worked perfectly without any trouble, which is pretty rare, so I am left feeling very impressed so far.
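Incidentally, a quick sanity check before copying the binary over to the device is to ask the file command what it makes of the output:

file webserver

If the cross-compile worked, it should report a 32-bit MIPS ELF executable rather than something for your build machine's architecture.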

Radio streaming with OpenWrt

I have been doing a bit more playing around with USB sound cards and OpenWrt. I thought that a pretty good use for my HooToo HT-TM02 would be as an internet radio player. So I have been experimenting. I installed the following packages (I already had USB support working):

  • kmod-usb-audio
  • madplay
  • alsa-utils

Which enables me to play internet radio streams from the command line. I have been trying the stations listed on www.internet-radio.com. If the station shows a PLS link, you can download it as a playlist file and extract the stream URL from inside using a text editor (or with the one-liner shown after the station list below). In many cases this seems to work very well (but not all streams worked). Then just use this command on OpenWrt, replacing the URL:

wget -O - [URL] | madplay - -a-30 -o wave:- | aplay

Which will play the stream (I'm dropping the volume with -30 in that example). In reality, I've been using all the commands in quiet mode and in the background, like this:

wget -q -O - [URL] | madplay -Q - -a-30 -o wave:- | aplay -q &

...which just plays the stream in the background without any other output to the console. I also found the BBC radio station streams listed on this website, which is pretty useful (and also the French station, Fip, for good measure):

  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio1_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio1xtra_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio2_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio3_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio4fm_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio4lw_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio4extra_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio5live_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_6music_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_asianet_mf_p
  • http://bbcmedia.ic.llnwd.net/stream/bbcmedia_nangaidheal_mf_p
  • http://audio.scdn.arkena.com/11016/fip-midfi128.mp3
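As for extracting a stream URL from one of those PLS playlists without a text editor: PLS files keep their URLs in File1=, File2=, ... entries, so something like this should pull out the first one (the station.pls filename is just an example; use whatever you downloaded):

sed -n 's/^File1=//p' station.pls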

A better way to set the volume from the command line is to use the amixer command; I'm using something like this:

amixer sset Headphone 50%

Although the fancier alsamixer command seems to work fine as well.

After using that for a while, I think it's a very good use for the HooToo, so I am building a simple front end and will keep it as a miniature internet radio player.

Hootoo + OpenWrt + Velleman K8055N

One of the things that I had kicking around at home was a Velleman K8055N (actually mine was the pre-made version, the VM110N), but I think they are essentially the same thing. I decided that it would be really cool to try it out on OpenWrt ... so that eventually I could try interfacing a tiny embedded Linux machine with the outside world.

Some quick googling led me to this driver, which I tried on a Debian virtual machine first, and then decided to run it on my Hootoo HT-TM02. I used a Docker container to build the kernel module according to the instructions, and then copied it over onto the device for installation.

The first attempt at installing the module gave the following error:

Collected errors:
 * satisfy_dependencies_for: Cannot satisfy the following dependencies for
 kmod-k8055d:
 * 	kernel (= 3.18.23-1-e2416fa0abee73ea34d947db4520f4e9) *

Which I assumed was just OpenWrt being a bit over-cautious, because I was actually running kernel v3.18.23. So I took the risk and overrode the error with this command:

opkg install /tmp/kmod-k8055d_0.2.1_3.18.23-ramips-1_ramips_24kec.ipk --nodeps

Perfect, it worked! So the K8055N is now accessible in the file system under /proc. To celebrate, I hooked up the digital-to-analogue converter to an old voltmeter, so I could watch the needle bounce around in response to what I'm typing on the command line:

I think that's pretty awesome, although I don't know what I'll use it for. Perhaps I'll build some kind of Web API and allow inputs and outputs to be controlled from a browser. Or ... come to think of it, I could build a really cool CPU meter with the output showing via the needle of the voltmeter.
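If I do try the CPU meter idea, I imagine it would be a small shell loop along these lines. To be clear, this is just a sketch: the /proc path for the analogue output is hypothetical, so check the k8055d driver for the real interface.

#!/bin/sh
# Hypothetical sketch: map the 1-minute load average onto the 0-255
# range of the K8055 analogue output. The /proc path is made up for
# illustration - the real entry depends on the k8055d driver.
while true; do
  load=$(cut -d ' ' -f 1 /proc/loadavg)
  val=$(awk -v l="$load" 'BEGIN { v = int(l * 255); if (v > 255) v = 255; print v }')
  echo "$val" > /proc/k8055/analog1   # hypothetical path
  sleep 1
done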

Hacking your Hootoo

For my wedding anniversary this year, my wife bought me a HooToo HT-TM02 (amongst other things). How nerdy is that? But it's a nice little machine which can have the default firmware replaced with OpenWrt. At about US $16 it's a very cheap way to have an embedded Linux machine. Personally, I didn't even try the standard firmware and put OpenWrt on straight away. It was pretty easy, but you need a USB stick to give the device some extra storage during the initial upgrade process. The instructions are here and the process was very smooth.

As soon as I had the initial OpenWrt installation, I decided to switch to trunk OpenWrt, so I used the sysupgrade command to install the 15.05 trunk image from here, which left me with a device running trunk OpenWrt. I then held the reset button in (whilst the device was powered on) for 30 seconds to make sure that the configs were all at default settings.

Next, I decided to install relayd to bridge the LAN (Ethernet) port with my existing Wi-Fi network. I followed the instructions here and that seemed to work fine. But even though I had followed those instructions exactly, the OpenWrt firewall was still getting in the way. So I just went ahead and disabled the firewall by using:

/etc/init.d/firewall disable
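Note that disable only prevents the firewall from starting at the next boot; to stop the instance that is already running in the same session, you also want:

/etc/init.d/firewall stop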

After doing that, I can switch off the Wi-Fi on my laptop, connect a network cable between the Hootoo and the laptop, and I'm on my network. The Hootoo is acting as a bridge between my own Wi-Fi and its own Ethernet socket - very nice!

The next thing I need to do is cross-compile some code and get it running on the Hootoo.

An update to my 'rCPU' remote monitoring webserver

I thought that it was about time I had a fiddle with the tool I made to remotely monitor CPU use. I have mainly been using it on my Raspberry Pi 2. I switched the JavaScript library from Smoothie Charts to Flot, in the hope of getting better browser support. But I was able to take advantage of some features of Flot at the same time. So this is how it looks now:

That clip was recorded by remotely monitoring my Raspberry Pi 2 from my MacBook. Hopefully it's a worthwhile improvement. The code is all on GitHub: davidsblog/rCPU.

More Docker as a cross-compiler

Now that I work in the Silicon Shed, I decided that I needed to have some more Linux in there ... and restore some balance to the universe. So I thought that a good place to start would be with OpenWrt running on my router. I already had a TP-Link WR740N lying around which I had been playing with before. So I went and got the latest version of OpenWrt (which was Chaos Calmer, 15.05) and installed that. Cool! Now I have a nice little router with plenty of features to play with, and it's an embedded Linux box as well.

But having gotten that far, I decided to set up a new cross-compile environment. I had done this before using virtual machines, but this time I wanted to use Docker to make it easier to compile my C programs using whatever machine I happen to be sitting in front of. I started by creating a base Docker image with all the files in place to give me a buildroot. In my Dockerfile I used Debian Jessie as the starting point and then added all the files by cloning them from OpenWrt's git repository. That generic image can be found at davidsblog/openwrt-build-15-05. It's an automated build with the source files coming from GitHub. I also wanted to do the same with the next stage - actually compiling OpenWrt from sources ... but when I tried, Docker Hub timed out the build. The OpenWrt build process can take a few hours, so the rest could not be done as an automated build.

I used my base image to create a .config file for the WR740N using the make menuconfig command, then copied that .config file out and referenced it in my next Dockerfile. This new Dockerfile takes the base image, adds the config file and then calls make to actually build my specific cross-compile environment for the WR740N. If somebody wanted to make a cross-compiler for a different device, they would just need to change the config file for their own device and use docker build to create an image.
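So rebuilding for another device is just the standard Docker workflow; something like this, assuming the customised Dockerfile and .config are in the current directory (the tag name is just an example):

docker build -t my-openwrt-crosscompiler .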

So I built the image and pushed it out to Docker Hub as davidsblog/openwrt-build-wr740n. As long as you have the bandwidth, it's much easier to pull a pre-configured cross-compiler than to set one up from scratch. And it's really easy to use.

This is how I'm using it: I created a script called /usr/local/bin/740ncc which contains this:

#!/bin/bash
# Mount the current directory as /src inside the container, then run
# whatever command was passed on the command line from that directory.
docker run --rm -v ${PWD}:/src davidsblog/openwrt-wr740n:latest \
     /bin/sh -c "cd /src; $*"

So then, on my local machine, I navigate to the folder containing the C sources I want to cross-compile. Now I can type something like 740ncc make and the make command will be routed to a Docker container which will do the cross-compilation for the WR740N. The compiled program will be on your local machine (not in the Docker container), just as if you had compiled it locally. I think that's very not-bad. I am also using Docker's --rm parameter so that the container is automatically removed afterwards. Here's an example where I'm building my rCPU monitoring webserver for the TP-Link WR740N:

I also discovered something interesting during all this: using the find command inside the same Docker image, but on different machines, does not always show the results in the same order. This had me puzzled for a while when I was using the find command in one of my scripts. I used the image on Ubuntu and the order of the find results was different to the same image running on my laptop on Elementary OS. In my experience, on the same machine the order of results from find is the same. I was expecting it to be the same for a container too, but obviously you can't rely on that. Interesting.

Safari Books Online

A few months back I subscribed to Safari Books Online. I am loving it! It's like living in my own bookshop library. Their Safari Queue app works well on all my iOS and Android devices (which is good for offline reading), and otherwise I can just read in a browser on laptops and desktops:

I have to admit that I have a great deal of respect for the O'Reilly Animal Books, which is what attracted me to Safari Books ... I initially thought that they only really had the O'Reilly catalogue available. But that's just wrong: their library has over 30,000 items from many publishers. I haven't even tried any of the video content yet - but I will get round to that.

I was very pleased to see that Safari Books Online had plenty of content from Cambridge University Press; at the time of writing they had 519 items from them. There are some things there that I'm sure I will take a look at.

The only problem is that I haven't finished any books yet. I have especially enjoyed parts of:

...and more (like stuff on C programming and Docker too). So I have been able to learn a lot of things, which makes me happy :-) But I also miss the satisfaction you feel when you finish reading something cover-to-cover. Maybe I'll get used to that, or maybe I will start getting more selective as I get into it.

I still find that the information found in a good quality book is better than the type of things you find when searching the internet. Which is why I still love technology books. But if I bought all the books I found which contained an interesting chapter, I would quickly run out of space and money. That's why Safari Books Online is soooo very cool.

Docker and the MEAN stack

I have been reading the book: Write Modern Web Apps with the MEAN Stack: Mongo, Express, AngularJS, and Node.js and wanted a way to mess about with some examples on the various machines I use (since I'm frequently switching between Mac, Windows and Linux boxes). So I decided to try and Dockerify the process a bit.

I wanted to build a Docker container that I could use on whatever platform I happened to be sitting in front of. I also wanted to be able to run Node and Mongo in a single container. I realise that outside the development environment you'd likely want to split them, but for some simple development I didn't want to have to start up multiple containers.

Of course, when the container runs, I would want to check that it is working, so I also set about including a very simple MEAN stack application as a kind of "Hello, World!" example. But by using Docker Volumes it is easy to replace this default app with something on your local hard disk or perhaps with a Docker Data Volume Container which only needs to contain your app. So the simple default app just needs to prove that Node and Mongo are running.

So my first question was how to go about running both Node.js and MongoDB in a single container, and the solution I have gone with is documented here. It adds a docker-friendly init system to the Ubuntu image. This means that when the container is shut down the init process will try to shut down other processes gracefully, rather than leaving them to be forcefully killed by the kernel. The init process will also reap any zombie processes which may accumulate.

Having a suitable base image, I could then build my own container with specific versions of Node.js and MongoDB. Pinning the versions means that I will have a stable set of binaries which will give repeatable behaviour during my development and testing. I don't want to waste time with version conflicts.

Anyway, the result can be found here: https://github.com/davidsblog/node-mongo/blob/master/Dockerfile. I am still tinkering with it, but if you look at the Dockerfile you'll probably get the idea. Both the location of the Node app and the data directory for MongoDB are set up as volumes, so you can override them at runtime. The container then just becomes an engine based on a set of stable binaries, and you can just provide it with your own code and data. You can test it like this:

docker run --rm -p 8080:8888 davidsblog/node-mongo

...and then point your browser to port 8080 of your container. You should see a simple app which allows you to post comments, where the comments are saved to MongoDB. But to switch out the default app for some of your own code, you can do this (assuming you are in the directory where your server.js is located):

docker run --rm -v ${PWD}:/vol/node/start -p 8080:8888 davidsblog/node-mongo

...and you should now be running your own program (as long as it is called server.js and assuming it works on the same port).
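In the same way, the MongoDB data directory can be mapped to a folder on the host so your data survives container restarts. A sketch of what I mean - note that the /vol/mongo/data path here is illustrative, so check the Dockerfile for the actual volume location:

docker run --rm -v ${PWD}:/vol/node/start -v ${PWD}/data:/vol/mongo/data -p 8080:8888 davidsblog/node-mongo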

In the container, the Node.js code is run via Nodemon, which means that when you make changes, Nodemon will restart the app for you - no manual restarting.

In the meantime, I can now mess about developing applications using the MEAN stack without installing everything on my local machine. Cool!