Hootoo + OpenWrt + Velleman K8055N

One of the things that I had kicking around at home was a Velleman K8055N (actually mine was the pre-made version, the VM110N) - but I think they are essentially the same thing. I decided that it would be really cool to try it out on OpenWrt ... so that eventually I could try interfacing a tiny embedded Linux machine with the outside world.

Some quick googling led me to this driver, which I tried on a Debian virtual machine first, and then decided to run it on my Hootoo HT-TM02. I used a Docker container to build the kernel module according to the instructions, and then copied it over onto the device for installation.

The first attempt at installing the module gave the following error:

Collected errors:
 * satisfy_dependencies_for: Cannot satisfy the following dependencies for kmod-k8055d:
 * 	kernel (= 3.18.23-1-e2416fa0abee73ea34d947db4520f4e9) *

Which I assumed was just OpenWrt being a bit overcautious, because I was actually running kernel v3.18.23. So I took the risk and overrode the error with this command:

opkg install /tmp/kmod-k8055d_0.2.1_3.18.23-ramips-1_ramips_24kec.ipk --nodeps

Perfect, it worked! So the K8055N is now accessible in the file system under /proc. To celebrate, I hooked up the digital-to-analogue converter to an old voltmeter, so I could watch the needle bounce around in response to what I'm typing on the command line:
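
If you fancy poking it yourself, something along these lines should make the needle sweep. This is only a sketch: the /proc entry below is hypothetical, so check dmesg (or ls /proc) after loading the module to find the real names:

# sweep the DAC output up and down (/proc/k8055/analog1 is a made-up path)
while true; do
  for v in 0 64 128 192 255 192 128 64; do
    echo $v > /proc/k8055/analog1
    sleep 1
  done
done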

I think that's pretty awesome, although I don't know what I'll use it for. Perhaps I'll build some kind of Web API and allow inputs and outputs to be controlled from a browser. Or ... come to think of it, I could build a really cool CPU meter with the output showing via the needle of the voltmeter.

Hacking your Hootoo

For my wedding anniversary this year, my wife bought me a Hootoo HT-TM02 (amongst other things). How nerdy is that? But it's a nice little machine which can have the default firmware replaced with OpenWrt. For about US$16 it's a very cheap way to get an embedded Linux machine. Personally, I didn't even try the standard firmware and put OpenWrt on straight away. It was pretty easy, but you need a USB stick to give the device some extra storage during the initial upgrade process. The instructions are here and the process was very smooth.

As soon as I had the initial OpenWrt installation, I decided to switch to trunk OpenWrt, so I used the sysupgrade command to install the 15.05 trunk image from here. I then held the reset button in (whilst the device was powered on) for 30 seconds to make sure that the configs were all back at their default settings.
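
For what it's worth, the upgrade itself is a one-liner once the image has been copied to the device (the filename here is illustrative rather than the exact one I used):

# flash the new image over the existing installation
sysupgrade -v /tmp/openwrt-15.05-ramips-rt305x-ht-tm02-squashfs-sysupgrade.bin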

Next, I decided to install relayd to bridge the LAN (ethernet) port with my existing Wi-Fi network. I followed the instructions here and that seemed to work fine. Although I felt that I had followed those instructions exactly, the OpenWrt firewall was still getting in the way. So I just went ahead and disabled the firewall by using:

/etc/init.d/firewall disable

After doing that, I can switch off the Wi-Fi on my laptop, connect a network cable between the Hootoo and the laptop, and I'm on my network. The Hootoo is acting as a bridge between my own Wi-Fi and its own ethernet socket - very nice!
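
For reference, the heart of that relayd setup is a relay 'pseudo-bridge' interface. Via UCI it boils down to something like this - a sketch based on the wiki instructions, where the wwan interface name is an assumption:

# join the wired lan to the Wi-Fi client (wwan) network via relayd
uci set network.stabridge=interface
uci set network.stabridge.proto='relay'
uci set network.stabridge.network='lan wwan'
uci commit network
/etc/init.d/network restart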

The next thing I need to do is cross-compile some code and get it running on the Hootoo.

An update to my 'rCPU' remote monitoring webserver

I thought that it was about time I had a fiddle with the tool I made to remotely monitor CPU use. I have mainly been using it on my Raspberry Pi 2. I switched the JavaScript library from Smoothie Charts to Flot, in the hope of getting better browser support, and I was able to take advantage of some of Flot's features at the same time. So this is how it looks now:

That clip was recorded by remotely monitoring my Raspberry Pi 2 from my MacBook. Hopefully it's a worthwhile improvement. The code is all on GitHub: davidsblog/rCPU.

More Docker as a cross-compiler

Now that I work in the Silicon Shed, I decided that I needed to have some more Linux in there ... and restore some balance to the universe. So I thought that a good place to start would be with OpenWrt running on my router. I already had a TP-Link WR740n lying around which I had been playing with before. So I went and got the latest version of OpenWrt (which was Chaos Calmer, 15.05) and installed that. Cool! Now I have a nice little router with plenty of features to play with, and it's an embedded Linux box as well.

But having gotten that far, I decided to set up a new cross-compile environment. I had done this before using virtual machines, but this time I wanted to use Docker, to make it easier to compile my C programs from whatever machine I happen to be sitting in front of. I started by creating a base Docker image with all the files in place to give me a buildroot. In my Dockerfile I used Debian Jessie as the starting point and then added all the files by cloning them from OpenWrt's git repository. That generic image can be found on DockerHub as davidsblog/openwrt-build-15-05. It's an automated build with the source files coming from GitHub. I also wanted to do the same with the next stage - actually compiling OpenWrt from sources - but when I tried, DockerHub timed out the build; the OpenWrt build process can take a few hours. So the rest could not be done as an automated build.

I used my base image to create a .config file for the WR740n using the make menuconfig command, then copied that .config file out and referenced it in my next Dockerfile. This new Dockerfile takes the base image, adds the config file and then calls make to actually build my specific cross-compile environment for the WR740n. If somebody wanted to make a cross-compiler for a different device, they would just need to change the config file for their own device and use docker build to create an image, roughly as sketched below.
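
Roughly like this - a sketch only, since the buildroot path inside the image is an assumption and the image tag is made up:

# generate a .config for your device using the base image
docker run -it --rm -v ${PWD}:/out davidsblog/openwrt-build-15-05 \
     /bin/sh -c "cd /openwrt && make menuconfig && cp .config /out"
# then write a Dockerfile that starts FROM the base image, ADDs the
# new .config and RUNs make, and build it:
docker build -t my-openwrt-cross .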

So I built the image and pushed it out to DockerHub as davidsblog/openwrt-build-wr740n. As long as you have the bandwidth, it's much easier to be able to pull a pre-configured cross-compiler than to set one up from scratch. And it's really easy to use.

This is how I'm using it: I created a script called /usr/local/bin/740ncc which contains this:

#!/bin/bash
docker run --rm -v ${PWD}:/src davidsblog/openwrt-build-wr740n:latest \
     /bin/sh -c "cd /src; $*"

So then, on my local machine, I navigate to the folder containing the C sources I want to cross-compile. Now I can type something like 740ncc make and the make command will be routed to a Docker container which will do the cross-compilation for the WR740n. The compiled program ends up on your local machine (not in the Docker container), just as if you had compiled it locally. I think that's very not-bad. I am also using Docker's --rm parameter so that the container is automatically removed afterwards. Here's an example where I'm building my rCPU monitoring webserver for the TP-Link WR740n:
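
In essence it's just something like this (the source path is illustrative):

cd ~/src/rCPU    # wherever the sources happen to live
740ncc make      # runs make inside the cross-compile container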

I also discovered something interesting during all this: using the find command inside the same Docker image, but on different host machines, does not always return results in the same order. This had me puzzled for a while when I was using find in one of my scripts. I used the image on Ubuntu and the order of the find results was different to the same image running on my laptop on Elementary OS. In my experience, on the same machine the order of results from find is consistent, and I was expecting the same to hold inside a container, but obviously you can't rely on that. Presumably find just returns entries in whatever order the underlying filesystem serves them up, and that depends on the host rather than the image. Interesting.

The Silicon Shed

I've been a bit quiet recently and my blog hasn't gotten much attention, but that's because I've been getting up to speed with my new job and adjusting to my new routine.

Nowadays I work from home, which is nice because I don't have to commute. I am particularly enjoying that, because sitting in the car for hours a day always seemed like a waste of time. This also means that I don't need to wake up at 6am each day...

But to keep home life and work life apart, I have made a separate office by converting my garage. So I don't actually work in the house, and I find that this really helps - I don't get distracted during working hours. It's worked out very well. Here's a picture of me sitting at my new desk:

My colleagues have named my new workspace the Silicon Shed and the name has really stuck. I have even tried to set things up so that a colleague can come over and join me for a bit of pair-programming. When I was reading the book "Hello, Startup" this sentence particularly made me smile:

"At this very moment, somewhere in the world, two programmers are sitting in a garage and creating our future, one line of code at a time."

These days I find myself mostly working with Azure WebJobs (which are awesome) and doing lots of C# async stuff. But when you work for a small company anything can happen, so that's a massive oversimplification really. Every day I'm learning something new, which is part of the fun.

La Palma Minecraft addiction?

On the Spanish island of La Palma in the Canaries, local town planners deny they've become addicted to Minecraft:

Well OK, just kidding, but that's the kind of thing that went through my head when I took the photo... sorry.

Safari Books Online

A few months back I subscribed to Safari Books Online. I am loving it! It's like living in my own bookshop. Their Safari Queue app works well on all my iOS and Android devices (which is good for offline reading) and otherwise I can just read in a browser on laptops and desktops:

I have to admit that I have a great deal of respect for the O'Reilly Animal Books, which is what attracted me to Safari Books ... I initially thought that they only really had the O'Reilly catalogue available. But that's just wrong: their library has over 30,000 items from many publishers. I haven't even tried any of the video content yet - but I will get round to that.

I was very pleased to see that Safari Books Online has plenty of content from Cambridge University Press - at the time of writing, 519 items. There are some things there that I'm sure I will take a look at.

The only problem is that I haven't finished any books yet. I have especially enjoyed parts of:

...and more (like stuff on C programming and Docker too). So I have been able to learn a lot of things, which makes me happy :-) But I also miss the satisfaction you feel when you finish reading something cover-to-cover. Maybe I'll get used to that, or maybe I will start getting more selective as I get into it.

I still find that the information in a good quality book is better than the kind of thing you find when searching the internet, which is why I still love technology books. But if I bought every book I found containing an interesting chapter, I would quickly run out of space and money. That's why Safari Books Online is soooo very cool.

Bad Robot!

In our house, we love our Roomba robot vacuum cleaner. We have had ours for years and it is pretty well behaved; just occasionally it eats a stray sock or something. But it seems to have had a bad day this time:

Oh dear! It must have bumped the fireplace and then gotten tangled up in the resulting chaos. No real harm done, but it did look pretty funny.

Docker and the MEAN stack

I have been reading the book: Write Modern Web Apps with the MEAN Stack: Mongo, Express, AngularJS, and Node.js and wanted a way to mess about with some examples on the various machines I use (since I'm frequently switching between Mac, Windows and Linux boxes). So I decided to try and Dockerify the process a bit.

I wanted to build a Docker container that I could use on whatever platform I happened to be sitting in front of. I also wanted to be able to run Node and Mongo in a single container. I realise that outside the development environment you'd likely want to split them, but for some simple development I didn't want to have to start up multiple containers.

Of course, when the container runs, I would want to check that it is working, so I also set about including a very simple MEAN stack application as a kind of "Hello, World!" example. But by using Docker Volumes it is easy to replace this default app with something on your local hard disk or perhaps with a Docker Data Volume Container which only needs to contain your app. So the simple default app just needs to prove that Node and Mongo are running.
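
For example, a data volume container could be wired up something like this - a sketch, where the image and container names are made up but /vol/node/start is the app volume described below:

# put your app into a data volume container (names here are illustrative)
docker create -v /vol/node/start --name my-mean-app my/app-image /bin/true
# run the node-mongo engine with the app volume pulled in from it
docker run --rm --volumes-from my-mean-app -p 8080:8888 davidsblog/node-mongo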

So my first question was how to go about running both Node.js and MongoDB in a single container, and the solution I have gone with is documented here. It adds a docker-friendly init system to the Ubuntu image. This means that when the container is shut down the init process will try to shut down other processes gracefully, rather than leaving them to be forcefully killed by the kernel. The init process will also reap any zombie processes which may accumulate.

Having a suitable base image, I could then build my own container having specific versions of Node.js and MongoDB. Using specified versions means that I will have a stable set of binaries which will give repeatable behaviour during my development and testing. I don't want to waste time with version conflicts.

Anyway, the result can be found here: https://github.com/davidsblog/node-mongo/blob/master/Dockerfile. I am still tinkering with it, but if you look at the Dockerfile you'll probably get the idea. Both the location of the Node app and the data directory for MongoDB are set up as volumes, so you can override them at runtime. The container then just becomes an engine based on a set of stable binaries, and you can just provide it with your own code and data. You can test it like this:

docker run --rm -p 8080:8888 davidsblog/node-mongo

...and then point your browser to port 8080 of your container. You should see a simple app which allows you to post comments, where the comments are saved to MongoDB. But to switch out the default app to some of your own code you can do this (assuming you are in the directory where your server.js is located):

docker run --rm -v ${PWD}:/vol/node/start -p 8080:8888 davidsblog/node-mongo

...and you should now be running your own program (as long as it is called server.js and assuming it works on the same port).
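
The Mongo data directory can be overridden in the same way. Something like this - assuming the container keeps its data in Mongo's default /data/db location, which is an assumption worth checking against the Dockerfile:

# mount your own app and persist the database on the host
docker run --rm -v ${PWD}:/vol/node/start -v ${HOME}/mongo-data:/data/db \
     -p 8080:8888 davidsblog/node-mongo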

In the container, the Node.js code is run via Nodemon, which means that when you make changes, Nodemon will restart the app for you - no manual restarting.

In the meantime, I can now mess about developing applications using the MEAN stack without installing everything on my local machine. Cool!

Docker plus Emscripten

I have started to experiment with Docker. I thought that I'd try it out, because I wanted to use Emscripten to compile some C programs to JavaScript (which uses the asm.js style). Since getting Emscripten set up can take some time, I thought that a pre-configured Docker container might be a useful shortcut. I also wanted to try out Docker on my MacBook at some stage, so that I can do Linuxy things without having to fire up a full Virtual Machine.

But for now, I am doing this on Ubuntu Server 15.04. Setting up Docker was pretty painless, like this:

sudo apt-get install docker.io

...after which I did a reboot to make sure the docker daemon started automatically at boot-time. I then verified everything was OK by doing this:

sudo docker run hello-world

To make life easier, you can add your user to the docker group so that docker commands don't need sudo. I did that like this:

sudo usermod -aG docker [my_username_here]

After logging out and back in, Docker no longer needs to be run with sudo. Cool... now to try Emscripten inside Docker. I found an image called apiaryio/base-emscripten-dev which looked like it should do the hard work for me. I tried it out like this:

docker run apiaryio/base-emscripten-dev emcc -v

Which will test things out. The first time takes a while and downloads hundreds of megabytes as it pulls down the image. But eventually a message comes back from the Emscripten compiler. I thought that it would be really cool to be able to call the Emscripten compiler, emcc, inside Docker from the host machine. So I came up with this script:

#!/bin/bash
docker run -v ${PWD}:/src apiaryio/base-emscripten-dev \
/bin/sh -c "cd /src; emcc $*"

...which I copied into /usr/local/bin named as demcc and marked it executable (sudo chmod +x /usr/local/bin/demcc). This script does a few things:

  1. mounts the current directory on the host in the Docker container (as /src)
  2. in the docker container, changes to the correct directory
  3. passes any parameters on to emcc in the container

So now I can just do this:

demcc hello.c -o hello.html

Which will compile the local C file into JavaScript, with all the hard work done inside a Docker container. So now Emscripten is working, and I didn't have to set it up. Happy days! I guess this same approach would work with other types of compilation, like cross-compiling for other platforms. I need to try that.