Docker and the MEAN stack

I have been reading the book Write Modern Web Apps with the MEAN Stack: Mongo, Express, AngularJS, and Node.js, and wanted a way to mess about with some examples on the various machines I use (since I'm frequently switching between Mac, Windows and Linux boxes). So I decided to try and Dockerify the process a bit.

I wanted to build a Docker container that I could use on whatever platform I happened to be sitting in front of. I also wanted to be able to run Node and Mongo in a single container. I realise that outside the development environment you'd likely want to split them, but for some simple development I didn't want to have to start up multiple containers.

Of course, when the container runs, I would want to check that it is working, so I also set about including a very simple MEAN stack application as a kind of "Hello, World!" example. But by using Docker Volumes it is easy to replace this default app with something on your local hard disk or perhaps with a Docker Data Volume Container which only needs to contain your app. So the simple default app just needs to prove that Node and Mongo are running.

So my first question was how to go about running both Node.js and MongoDB in a single container, and the solution I have gone with is documented here. It adds a docker-friendly init system to the Ubuntu image. This means that when the container is shut down the init process will try to shut down other processes gracefully, rather than leaving them to be forcefully killed by the kernel. The init process will also reap any zombie processes which may accumulate.
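
To give a rough idea of the approach, here is a minimal sketch of the kind of start script such an image might run. The MongoDB data path and log path here are just illustrative assumptions, not the actual contents of my image:

#!/bin/sh
# Hypothetical start script: run MongoDB in the background, then run the
# Node.js app in the foreground so the init system can supervise it and
# shut everything down cleanly when the container stops.
mongod --dbpath /vol/mongo/data --fork --logpath /var/log/mongodb.log
cd /vol/node/start && exec nodemon server.js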

With a suitable base image in place, I could then build my own container with specific versions of Node.js and MongoDB. Pinning the versions means I have a stable set of binaries, which gives repeatable behaviour during my development and testing. I don't want to waste time with version conflicts.

Anyway, the result can be found here: https://github.com/davidsblog/node-mongo/blob/master/Dockerfile. I am still tinkering with it, but if you look at the Dockerfile you'll probably get the idea. Both the location of the Node app and the data directory for MongoDB are set up as volumes, so you can override them at runtime. The container then becomes an engine built from a stable set of binaries, and you simply provide it with your own code and data. You can test it like this:

docker run --rm -p 8080:8888 davidsblog/node-mongo

...and then point your browser at port 8080 on the machine running Docker. You should see a simple app which allows you to post comments, where the comments are saved to MongoDB. But to swap the default app for your own code you can do this (assuming you are in the directory where your server.js is located):

docker run --rm -v ${PWD}:/vol/node/start -p 8080:8888 davidsblog/node-mongo

...and you should now be running your own program (as long as it is called server.js and listens on the same port).
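
In the same way, you can keep the MongoDB data files on the host by overriding the data volume too. Something along these lines should work, although /vol/mongo/data is just my placeholder for the data volume - check the Dockerfile for the exact path:

docker run --rm -v ${PWD}:/vol/node/start -v ${PWD}/data:/vol/mongo/data -p 8080:8888 davidsblog/node-mongo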

In the container, the Node.js code is run via Nodemon, which means that when you make changes, Nodemon will restart the app for you - no manual restarting.

In the meantime, I can now mess about developing applications using the MEAN stack without installing everything on my local machine. Cool!

Docker plus Emscripten

I have started to experiment with Docker. I thought that I'd try it out, because I wanted to use Emscripten to compile some C programs to JavaScript (which uses the asm.js style). Since getting Emscripten set up can take some time, I thought that a pre-configured Docker container might be a useful shortcut. I also wanted to try out Docker on my MacBook at some stage, so that I can do Linuxy things without having to fire up a full Virtual Machine.

But for now, I am doing this on Ubuntu Server 15.04. Setting up Docker was pretty painless, like this:

sudo apt-get install docker.io

...after which I did a reboot to make sure the docker daemon started automatically at boot-time. I then verified everything was OK by doing this:

sudo docker run hello-world

To make life easier, you can add your user to the docker group so that docker commands do not need to be run with sudo. I did that like this:

sudo usermod -aG docker [my_username_here]

After logging out and back in, Docker does not need to be run with sudo. Cool... now to try Emscripten inside docker. I found a container called apiaryio/base-emscripten-dev which looked like it should do the hard work for me. I tried it out like this:

docker run apiaryio/base-emscripten-dev emcc -v

Which will test things out. The first run takes a while and downloads hundreds of megabytes as it pulls down the image. But eventually a message comes back from the Emscripten compiler. I thought that it would be really cool to be able to call the Emscripten compiler, emcc, inside Docker from the host machine. So I came up with this script:

#!/bin/bash
docker run -v ${PWD}:/src apiaryio/base-emscripten-dev \
/bin/sh -c "cd /src; emcc $*"

...which I copied into /usr/local/bin, named it demcc, and marked it executable (sudo chmod +x /usr/local/bin/demcc). This script does a few things:

  1. mounts the current directory on the host in the Docker container (as /src)
  2. in the docker container, changes to the correct directory
  3. passes any parameters on to emcc in the container

So now I can just do this:

demcc hello.c -o hello.html

Which will compile the local C file into JavaScript, with all the hard work done inside a Docker container. So now Emscripten is working, and I didn't have to set it up. Happy days! I guess this same approach would work with other types of compilation, like cross-compiling for other platforms. I need to try that.

Experiments with Randomness

In software, the amount of randomness collected by an operating system is called entropy. I have recently been messing about with entropy on my Linux boxes. This came about because I was fiddling with https (SSL/TLS), and cryptography relies on a good source of random numbers. Those random numbers need to come from somewhere.

On most Linux machines, you can see how much entropy your system has by looking at the value in /proc/sys/kernel/random/entropy_avail.
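
For example, you can print the current value, or watch it change once per second, like this:

# print the kernel's current entropy estimate (in bits)
cat /proc/sys/kernel/random/entropy_avail

# ...or watch the value update every second
watch -n 1 cat /proc/sys/kernel/random/entropy_avail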

If your machine runs low on entropy, it means that any cryptographic calculations will either have to use a weak source of random numbers (and therefore be less secure) or will block until there is more entropy (meaning that things on your system might appear to slow down).

So out of curiosity I decided to experiment with this. I took my CPU monitoring webserver and adapted it to watch the entropy available rather than the CPU use. If you're really interested, the code can be found on GitHub here.

NOTE: you probably won't want to install the entropy monitoring webserver on your machine, because you would not want to tell the outside world how much entropy your system has available. I am only using it for experimentation and learning.

But ... here are some experiments that I did:

  • Driver Activity

    On Linux, the system uses various sources of entropy. Some of the entropy comes from drivers - including the keyboard and mouse drivers. This is why a VM or perhaps even a 'headless' machine might struggle to generate good quality random numbers - because it is unlikely to see the same type of driver activity. The following experiment demonstrated the effect of mouse movements on the amount of entropy available in a VM:

    You can see the entropy increasing much more rapidly when the mouse moves.

  • Process Activity

    Starting a process on Linux consumes entropy. So you can try running some commands in a terminal window to see the impact on your entropy (there is a small sketch of this after the list):

    It's interesting to see the entropy drop as I'm running simple commands. I believe that a new process consumes entropy because it will use random numbers for Address Space Layout Randomization (ASLR).
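
As a rough illustration of the process-activity effect, you could run a quick ad-hoc test like this, checking the entropy estimate before and after spawning a handful of short-lived processes:

# check the entropy estimate, spawn a few processes, then check it again
cat /proc/sys/kernel/random/entropy_avail
for i in 1 2 3 4 5; do ls /tmp > /dev/null; done
cat /proc/sys/kernel/random/entropy_avail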

It was reasonably interesting to do those experiments. There is some useful further reading about random numbers and cryptography here. I also found some work that has been done locally here in Cambridge, at this blog, which was interesting ... and also involves the Raspberry Pi.

Using https with stunnel and ssl_wrapper

Whilst randomly wandering around GitHub a few weeks ago, I noticed ssl_wrapper, which I thought was interesting. Actually, I quite liked the idea of moving the https stuff into a separate module. It means that any vulnerabilities found in the SSL/TLS stuff could be patched without having to do anything to the actual web server. I am also a fan of the philosophy of smaller pieces of code which can be well tested independently. I suppose this is just the Unix philosophy. But whatever you call it, I liked the idea.

So I had a bit of a play with ssl_wrapper, and even forked the repository to make the basic instructions a bit easier for me to follow, including the creation of a certificate for testing.
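
For reference, a self-signed certificate for this kind of testing only takes one openssl command, something along these lines (the file names here are just examples):

# create a self-signed certificate and private key, valid for a year
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365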

So I tried it out by running my own CPU Monitoring webserver over an https connection. However, in Chrome, I noticed that Google considers it to be using 'obsolete cryptography', like this:

I know that Chrome says this about a lot of existing sites, but I thought that it would be a good idea to try and make that message go away. But so far, I have not managed to do that with ssl_wrapper. In the meantime, I have logged an issue on GitHub.

UPDATE: this has now been fixed (but not by me, by the original authors). You can use ssl_wrapper and Chrome will say you're using 'modern cryptography'.

But since I was now interested in all this stuff, I decided to go and see whether there are alternatives. Indeed there are, and a good one is stunnel. On my Ubuntu box installing it was a breeze, like this:

sudo apt-get install stunnel4

NOTE: I found that it is sometimes important to refer to stunnel4 with the '4' at the end, because your machine might already have an older version.

I was able to take the same certificate I was using with ssl_wrapper and set it up in stunnel. But straight away I found that Chrome was happier, and declared that I was using modern cryptography, like this.
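
In case it helps, the stunnel configuration needed for this sort of setup is tiny. A minimal sketch might look something like the following, although the paths and port numbers here are only assumptions for illustration (my actual settings live in the repository mentioned below):

; hypothetical /etc/stunnel/stunnel.conf
cert = /etc/stunnel/mycert.pem
key = /etc/stunnel/mykey.pem

[https]
accept = 443
connect = 8080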

Nice! Another advantage of stunnel is that there has been more recent activity on maintaining the code, which is somewhat comforting.

So, I am now setting up a separate GitHub repository to help me test stunnel on various Linux boxes (I have yet to try it on my Raspberry Pi, for example). It means I can be lazy and do this:

sudo apt-get install stunnel4
git clone https://github.com/davidsblog/stunnel4_config
cd stunnel4_config
sudo make

Which is a pretty painless way to test it out on different machines.

FSV, or "this is a Unix System... I know this"

In this month's Linux Voice magazine (June, issue 16), they mentioned that the 3D File System Visualizer, like the one used in the original Jurassic Park movie, has been ported to modern Linux machines. It was just too tempting, so I went and tried it out. I managed to get it running on my laptop; this is how it looked:

That video clip was recorded in real time, so considering this was done on a 5-year-old laptop I was quite impressed. I was doing that on Elementary OS (Freya) and these were the commands (I hope that I noted them all down correctly):

git clone https://github.com/mcuelenaere/fsv
sudo apt-get install autogen
sudo apt-get install autoconf
sudo apt-get install libtool
sudo apt-get update
sudo apt-get install libgtk2.0-dev libgl1-mesa-dev libgtkgl2.0-dev
sudo apt-get install libglu1-mesa-dev
cd fsv
./autogen.sh
./configure
make
sudo make install

Then you should find the fsv program in your /usr/local/bin directory. You just need to run it.

Although in the end it was not really as exciting as I expected ... (perhaps that's because there weren't any Velociraptors), it was still a cool thing to do. Apparently, the version used in the Jurassic Park movie was called FSN and was made for IRIX systems.

Supracafe in Santa Cruz de La Palma

Well, it would not be unusual for me to mention a good place to get coffee in the Canary Islands. We've recently returned from the island of La Palma, which is a fantastic place. We were walking along the main street of the capital, Santa Cruz, and noticed this place called 'Supracafe' (on Calle O'Daly, the main shopping street):

It looked tempting, and the views were pretty nice:

Anyway, we had coffee and cake ... and it was the best:

If you go there, then we'd recommend it. You can recognise the cafe by the mural on their wall:

Both the cake and the coffee were delicious. The coffee was like a mini Flat White or a large Cortado. The chocolate cake contained a fantastic chocolate and hazelnut mousse and was not too heavy despite the fact that we were given a massive slice.

Apart from that, it was just nice to sit on the street, take a break and watch the world go by...

"Programming Projects In C..." book

It's good to see that some new C programming books still get published. These days they are quite rare, but it's great to see when one does appear. At the moment I am enjoying this one:

Programming Projects In C For Students Of Engineering, Science, and Mathematics.

I bought mine when I found a copy whilst browsing the Cambridge University Press bookshop. As I flipped through the book in the store I stumbled on the chapter about makefiles, and realised there and then that it was going to be good. I liked the iterative way that the makefiles were presented - as a series of layered enhancements.

It even motivated me to go away and improve some of my makefiles, which were already working fine ... but could be done even better. The advantage is that now I have more of a chance to reuse them on other projects.

But now, I've had an opportunity to read more of this book, and have really enjoyed it. There are lots of little gems tucked away which would be worth taking note of. I reckon that most C programmers would find something interesting in this book, not just developers working in the fields of science, mathematics and engineering.

The C programming language has been around for a long time now; I think that even the author admitted to using certain blocks of code for 30 years or so. But for me, that's part of the beauty of it. There are not many languages where you could have written a block of code decades ago and find that it's just as good today.

If you're doing C programming and are curious to see how somebody else uses the language then you'll probably enjoy reading this book.

Elementary OS upgrade

I've just upgraded my old laptop OS (it's a Toshiba Satellite T130, and must be about 5 years old). Previously, I had been running Elementary OS Luna, which I found pretty impressive. It had a couple of niggles ... for example the file manager app always seemed to take ages to load. Now that the new version, Freya, has come out, I decided to try it. I went for a complete wipe and reinstall.

I am quite pleased with the upgrade. The slow file manager startup time has even gone away, and the whole OS seems even more polished. However, I have had a problem with the pointer getting corrupted when I did drag and drop, which was annoying. I am currently trying this fix, which will hopefully make it go away. Anyway, I celebrated a successful reinstall by going off to the NASA JPL Wallpapers site and grabbing myself a nice background, so here's a screenshot:

I think that it's still the most visually appealing Linux I have tried and I'm quite happy that I've upgraded to the latest version. It runs nice and fast on my old laptop, and it's nice to have something that looks good without eating all your CPU cycles.

Remote monitoring VoCore CPU use

So I really wrote my CPU monitoring webserver program to be able to monitor my Raspberry Pi, but hoped that the code would run on other Linux machines too. Indeed, I have also been testing it in a virtual machine running Debian. The code is runnable from Xcode on my Mac, but unfortunately only as a simulation for testing purposes, because /proc/stat does not exist on Mac OS. So, for the moment, it definitely won't work on the Mac.

But I did think that it would work on OpenWrt, so decided to try it on my VoCore. In anticipation I had already put the necessary stuff in the makefile, so all I needed to do was go to a box with OpenWrt cross compilation set up and do this:  make vocore.

Then I copied the resulting binary executable to my VoCore and it worked straight away. I even made a recording of the initial run for posterity:

Since all the resources (HTML and JavaScript files) are embedded in the executable, it is really very simple to deploy: just copy and run a single file. The version compiled for the VoCore came out at 112 KB in size. Hopefully that's not too bad. I could compress the resource files if space was really important, but I haven't felt the need to do that.
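
For example, deploying it is roughly this - although the binary name and the address here are only examples, so substitute whatever matches your own build and network setup:

# copy the single executable to the VoCore and run it over ssh
scp cpumon root@192.168.61.1:/root/
ssh root@192.168.61.1 '/root/cpumon'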

It's pretty good to see this running on a machine about the size of a postage stamp. Obviously I could make it just show a single graph when it runs on a single core machine. Perhaps I'll do that when I get some time.

Sinclair nostalgia

A while back, in a fit of 1980s nostalgia, I realised that my place of work is within walking distance of the old Sinclair building in Cambridge. So I decided to walk over there at lunchtime and see if I could still find it. This is how the building looked back in the 80s:

...which is a single frame taken from this YouTube clip, which gives a glimpse of the inside too:

Anyway, this is what I found when I walked over there recently:

I was quite pleased that there was still a small street sign bearing the name "Sinclair" at the entrance. The building looks like it's part of Anglia Ruskin University nowadays.

It's a shame that the big Sinclair logo was taken down. But the building is clearly recognisable, which is pretty cool.