First steps into the world of Nim

I was recently doing a bit of reading up on the Rust programming language, but a stray comment somewhere about the Nim programming language sent me off on a bit of a tangent. The thing that really got me interested in Nim was that it compiles to C, which opens up a lot of options for portability. One of the reasons why I like programming in C is that you can run the code on all kinds of machines, from tiny embedded devices to supercomputers. But that does come at a cost, because development takes longer and you often have a lot more typing to do because of all the boilerplate.

But it looks like Nim would allow you to be quite productive, whilst the resulting executables should still be efficient and fast. To get some ideas, I went off to Rosetta Code and found the Nim code for a simple webserver which looks like this:

import asynchttpserver, asyncdispatch
 
proc cb(req: Request) {.async.} =
  await req.respond(Http200, "Hello, World!")
 
asyncCheck newAsyncHttpServer().serve(Port(8080), cb)
runForever()

Being able to create a webserver in just a few lines of code, with the potential for it to be portable and run on all kinds of devices, seemed very tempting! By this point, I'd forgotten about Rust and wanted to explore Nim a bit further.

So whilst Nim has not reached v1.0 yet, it certainly looked very compelling. The compiler documentation made cross-compilation look fairly straightforward, so before going off and spending a lot of time learning a new language, I decided to check that I *really* could write portable code by cross-compiling something for a router running OpenWrt. I was very happy to find that within a few minutes I had the example webserver running on my TP-Link WR740N, as shown here:

To my amazement and joy, this really wasn't that hard. After installing Nim, I had to edit Nim/config/nim.cfg to point to my cross-compile toolchain, so I ended up with these lines:

mips.linux.gcc.exe = "/home/openwrt/openwrt/staging_dir/toolchain-mips_34kc_gcc-4.8-linaro_uClibc-0.9.33.2/bin/mips-openwrt-linux-gcc"
mips.linux.gcc.linkerexe = "/home/openwrt/openwrt/staging_dir/toolchain-mips_34kc_gcc-4.8-linaro_uClibc-0.9.33.2/bin/mips-openwrt-linux-gcc"

After doing that, I simply needed to pass the correct parameters to the Nim compiler, like this:

nim c --cpu:mips --os:linux webserver.nim

That cross-compiled the example webserver, ready to run on my TP-Link WR740N. It actually seems pretty awesome. I think I need to learn more; perhaps I'll even go and buy the Nim in Action book. Everything worked first time without any trouble, which is pretty rare, so I am left feeling very impressed so far.

"Programming Projects In C..." book

It's good to see that new C programming books still get published. These days they are quite rare, so it's a pleasure when one does appear. At the moment I am enjoying this one:

Programming Projects In C For Students Of Engineering, Science, and Mathematics.

I found my copy whilst browsing the Cambridge University Press bookshop. As I flipped through it in the store I stumbled on the chapter about makefiles, and realised there and then that it was going to be good. I liked the iterative way the makefiles were presented - as a series of layered enhancements.

It even motivated me to go away and improve some of my own makefiles, which were already working fine ... but could be better. The advantage is that I now have more chance of reusing them on other projects.

Since then, I've had the opportunity to read more of the book, and have really enjoyed it. There are lots of little gems tucked away which are worth taking note of. I reckon that most C programmers would find something interesting here, not just developers working in the fields of science, mathematics and engineering.

The C programming language has been around for a long time now; I think the author even admitted to using certain blocks of code for 30 years or so. But for me, that's part of the beauty of it. There are not many languages where you could have written a block of code decades ago and find that it's just as good today.

If you're doing C programming and are curious to see how somebody else uses the language then you'll probably enjoy reading this book.

Remote monitoring Raspberry Pi 2 CPUs

I wanted to visualise the load on all those CPU cores in my new Raspberry Pi 2, so I decided to build a remote CPU monitoring webserver. The idea is to serve a webpage showing the percentage use of the CPU cores in a nice scrolling graph. Another idea I wanted to try was to keep all the HTML and JavaScript files embedded in the executable, so you only need the single binary program file. It means you can just run the program without having to worry about putting supporting files in a particular place.
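
There's no magic to the embedding: pages can be stored as C string constants (a tool like xxd -i will generate these from real files) and served straight from memory. Here is a minimal sketch of the idea, with made-up content rather than the actual rCPU pages:

#include <stdio.h>

/* The HTML lives in the binary as a string constant; in a real build
   it could be generated from a file with xxd -i. The names and content
   here are made up for illustration. */
static const char index_html[] =
    "<html><head><title>CPU Monitor</title></head>\n"
    "<body><h1>rCPU</h1><div id=\"graphs\"></div></body></html>\n";

int main(void)
{
    /* the webserver would copy this buffer straight into the HTTP
       response; printing it here just shows the page is in the binary */
    fputs(index_html, stdout);
    return 0;
}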

I'm hopeful that because it's lightweight and written in C, the program itself won't consume much of the Raspberry Pi's resources. If you wanted, you could probably run it as a service when the machine starts and not notice any difference. The CPU data is sent from a minimal Web API built into the webserver, which sends out the CPU percentages as an array in JSON format. This means that most of the work (all the plotting and scrolling) is done by the browser, and on the server side we just need to send an array of numbers every second or so. And whilst I was at it, I also included the ability to monitor the core temperature (when running on a Raspberry Pi).
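
For anyone curious about the server side, the measurement boils down to reading /proc/stat twice and comparing the counters. This is just a rough sketch of that calculation for a single core, not the actual rCPU source:

#include <stdio.h>
#include <unistd.h>

/* read one core's counters from /proc/stat (assumes the usual layout:
   an aggregate "cpu" line first, then one line per core)              */
static int read_cpu0(unsigned long long *busy, unsigned long long *total)
{
    unsigned long long user, nice, sys, idle, iowait, irq, softirq;
    int ch;
    FILE *f = fopen("/proc/stat", "r");
    if (!f) return -1;
    while ((ch = fgetc(f)) != EOF && ch != '\n') { }  /* skip the "cpu" total line */
    if (fscanf(f, "cpu0 %llu %llu %llu %llu %llu %llu %llu",
               &user, &nice, &sys, &idle, &iowait, &irq, &softirq) != 7)
    {
        fclose(f);
        return -1;
    }
    fclose(f);
    *busy = user + nice + sys + irq + softirq;
    *total = *busy + idle + iowait;
    return 0;
}

int main(void)
{
    unsigned long long b1, t1, b2, t2;
    if (read_cpu0(&b1, &t1) == 0)
    {
        sleep(1);  /* the percentage is the busy delta over one second */
        if (read_cpu0(&b2, &t2) == 0 && t2 > t1)
            printf("[%.1f]\n", 100.0 * (b2 - b1) / (t2 - t1));
    }
    return 0;
}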

I have put the resulting project on GitHub, so anybody who wants to monitor their CPU cores over the network can try it. I expect it will work on other Linux machines too; it's not restricted to the Raspberry Pi. The number of graphs is updated dynamically, depending on the number of cores you have.

This is how it looks (in this case showing the graphs on a simulated iPhone 4s):

If you want to, you can try it out like this:

git clone https://github.com/davidsblog/rCPU
cd rCPU/rCPU
make
sudo ./rcpu 80

...which will only work if you're not already running a webserver on the machine; otherwise replace the 80 on the last line with a different port. When the server is running, simply point a browser at your machine's IP address (and port, if it's different from 80) and enjoy.

I did notice that the graphs don't plot very nicely on the default browser included with Raspbian (although if you install an alternative like Chromium it should be OK). But since the purpose is to monitor the CPUs remotely this should not really be an issue.

Another lightweight webserver for the Vocore

Qin Wei, the inventor of the Vocore, has written his own lightweight web server. I haven’t tried it out personally though, because I have been messing about with my own minimal webserver as well.

Actually, one of the reasons why I have been working on my own web server is so that I could run it on minimal hardware - the Vocore being a good candidate. So it goes without saying that I have been itching to try that out.

I already did a trial run some months ago, when I got it running on OpenWrt on some different hardware, so the procedure for the Vocore is pretty much the same. I’ve made it slightly easier, because the makefile now has a target called ‘openwrt’ which does the cross-compilation. I guess it should work for whatever device your OpenWrt build environment has been configured for, but I’ve only tried it on the Vocore.

Anyway, I’ve finally got to the point where I’ve had time to try it out. I still needed to install the POSIX thread library, by running these commands on the Vocore:

opkg update
opkg install libpthread

And after that I was able to run dweb without any problems, like this:

Even the Ajax calls back to the Vocore were working, which means I could build something to control the GPIOs through JavaScript in the browser if I wanted. But in the meantime, I decided to try serving up some of the system statistics, so I did this:

cd /dweb
ln -s /proc/stat stat.txt

That makes the contents of /proc/stat available to the webserver, meaning I could browse to /stat.txt and view the live statistics in my browser. I thought that was an interesting little experiment.

LED blink program for Vocore + Dock

Having gotten one of the Vocore’s LEDs to blink from the command line, I decided that a simple C program to do the same thing would be useful, just to make sure that the cross-compiler had been set up correctly and that everything was working properly.

A program that just blinks an LED is a handy way to verify that you can successfully run your own code: if you can cross-compile a test program like this, copy it to the target hardware, and see it working, then you can start building whatever you like with a degree of confidence.

So I’ve put the code for the Vocore blink program on GitHub just in case… NOTE: it’s designed to work without any additional electronics if you have the dock attached.
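
If you just want to see the general shape of this kind of program, here’s a minimal sketch (not the actual code from my repo) which drives an LED through the sysfs LED interface. The LED name below is a placeholder, so check /sys/class/leds/ on your device for the real one:

#include <stdio.h>
#include <unistd.h>

/* placeholder path - list /sys/class/leds/ to find the real LED name */
#define LED_PATH "/sys/class/leds/your-led-name/brightness"

static int set_led(const char *value)
{
    FILE *f = fopen(LED_PATH, "w");
    if (!f) return -1;
    fputs(value, f);
    fclose(f);
    return 0;
}

int main(void)
{
    int i;
    for (i = 0; i < 10; i++)  /* ten on/off cycles */
    {
        if (set_led("1") < 0) { perror(LED_PATH); return 1; }
        usleep(250000);       /* on for a quarter of a second */
        set_led("0");
        usleep(250000);
    }
    return 0;
}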

The project includes a simple Makefile which (hopefully) should work in most cases. As long as you have an OpenWrt build environment set up for the Vocore you should just be able to type make and you will then have the program cross-compiled and ready to be copied onto the device. Doing something like: scp ./blink root@192.168.61.1:/ should copy the program over to your Vocore. Once you’ve done that, you just need to ssh into the Vocore and run it.

Oh, you’ll probably need to make sure that the program is marked as executable, otherwise you’ll get a Permission denied message. I used chmod +x blink to do that.

Linux on Microsoft Azure

I've recently been messing about with Microsoft's Azure platform, and I already think it's brilliant. I didn't expect the support for Linux Virtual Machines to be quite so good. With literally just a few clicks I created a Linux box in the cloud (I went for Ubuntu); I can't really see it getting much easier. I suppose I should have known when I recently saw that Microsoft's CEO Satya Nadella had presented this:

Anyway, the first thing that I did with Azure was test my Linux VM out by copying over my C webserver code and compiling it. Everything went perfectly. But I quickly realised that it would be useful to be able to leave my code running when I disconnected from the console. That's where GNU Screen came in, which allowed me to start my program in a session and leave it running after I had disconnected. GNU Screen is a nice, simple solution, and it means I can keep my own code simple too. I like this :-) At the same time I noticed a couple of bugs in the webserver code which I had not spotted before, so I was able to fix them too.

It's great to think that I can write some C code on my MacBook, cross-compile and test it on a small router running OpenWrt, then move it to the Azure platform without any worries. This is one reason why I still like programming in C (and Linux).

Next I decided to copy my H2D2 server across to Azure (of course). Again, everything worked perfectly, so I'm currently testing H2D2 in the cloud; let's see how stable it is. I also took the opportunity to tidy up the HTML, to make it look more presentable:

For some reason it came out in grey. I don't really know why.

Since I'm expecting to see a few crashes, I have used a simple shell script like this one, which will restart my server program when something goes wrong.

Originally, I started with the most basic type of VM on offer, with a shared core. But then I upgraded to an 8-core machine, which gave a noticeable performance boost (as you'd expect). It was nice to see how easily Azure can upgrade a VM; again it was all done with a couple of clicks (and a reboot of the VM). For now though, I've gone back to the shared core, which will be less of a drain on my account.

Running dweb on OpenWrt

To take OpenWrt for a test drive, I have been messing about with one of these TP-Link WR740N routers. It gives me a small, low-power Linux box, which I’m sure I will find many uses for. When you consider that this £30 router comes in a consumer-style plastic case with a power supply and even an Ethernet cable, you realise that it’s a pretty good deal. It can’t do all the things that a Raspberry Pi can do, but it still has potential. I also found this link, which shows I’m not the only one who thinks so.

So after getting the cross compiler set up (which I did using a Debian machine running inside VirtualBox) and getting some simple C programs running (flashing the LEDs), I decided to try something a little more interesting.

Despite the fact that OpenWrt seems well catered for in the web server department (it can run uHTTPd, which supports SSL and even embedded Lua scripts), I still wanted to try my own web server code, out of curiosity and just because I can.

So I decided to cross-compile my own web server, called dweb, for OpenWrt and try it out. I’m pleased to say that it worked without any modification. The only thing I needed to do was install the POSIX thread library onto the router, using this command:

opkg install libpthread

…and after doing that I could run dweb. My simple web-based API example, which uses jQuery Ajax to post values, worked perfectly.

So now I can write code in C and expose it as a web-based API from my router. The problem is, I don’t really know why I’d want to do that. But in any case I can. At the moment it’s a solution looking for a problem. But it’s been fun.

What I wanted to achieve with dweb was a small webserver written in portable C code, without external dependencies. I suppose that using the POSIX thread library would be considered a dependency, but the dweb source code allows you to turn that off.

So hopefully this means that dweb will run on one of these VoCore one-inch Linux machines, which would be pretty cool. I am thinking about what I could do with a tiny webserver running an API which exposes the GPIO pins over HTTP. Perhaps I’ll build that home sensor network I’ve been wanting ;-)

dweb: a lightweight portable webserver in C

I've been continuing to experiment with the source code to 'nweb' (a minimal web server written in C). In fact, I've probably experimented so much that I've changed most of the code beyond recognition by now. But I'm still calling my version 'dweb' in honour of the original.

Since the C# webserver I wrote ages ago has proved quite popular, I thought that I'd try something similar in C. I realised that I should be able to produce a lightweight little web server, which could also be a starting point for some other things I have in mind. Because I'm not using any external libraries, there are no dependencies to worry about. Well, OK, I have used the POSIX thread library, but it can be taken out by changing a #define.
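
The mechanism is plain conditional compilation; the macro name below is illustrative rather than the exact one in the dweb source, but the pattern is the same:

/* illustrative macro name - see the dweb source for the real one */
#define NO_THREADS 0

#if !NO_THREADS
#include <pthread.h>   /* build with -pthread when threads are enabled */
#endif

static void *handle_hit(void *arg)
{
    /* ... read the request and send the response here ... */
    (void)arg;
    return NULL;
}

void on_connection(void)
{
#if NO_THREADS
    handle_hit(NULL);      /* serve the request on the main thread */
#else
    pthread_t worker;      /* serve the request on a worker thread */
    pthread_create(&worker, NULL, handle_hit, NULL);
    pthread_detach(worker);
#endif
}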

I'm attempting to make something which can easily be adapted, where you simply need to write a function which assembles your HTML response, and everything else is done for you. I'm making use of C function pointers for that. All you need to do is write your custom "responder" function (using the correct signature) and then call my webserver function, passing in a pointer to it.

That way, by adding a single C file to a project and writing a few lines of code you can have an embedded web server - albeit a very simple one of course.

Obviously I'm not attempting to write a fully featured web server platform, but this is the type of hobbyist project you could run on a Raspberry Pi to create a very simple web interface for something, whilst keeping it very lightweight and low on dependencies. It might work on different types of embedded Linux too (I'd be quite interested to know if it does).

But for my purposes, it will be an easy way to provide a simple web-based API, where I can send commands with an HTTP POST or just retrieve data by sending a GET.

My main aim was to make a custom webserver possible in C with just a few lines of additional code. Being portable, it should run on Mac OS X, Linux (including Android) and even Windows if Cygwin is installed. It certainly worked OK on the Raspberry Pi and I've also used it on my Android tablet.

Since I've included support for HTTP POST (which nweb didn't have), simple form-posting scenarios will work, and even Ajax-type stuff should too. The example code I've written allows HTML form values to be submitted, and it also includes a simple Ajax call where a parameter is sent to the server and a return value comes back in the response.

So how does the code look? Well, here is the simplest (trivial) implementation:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "dwebsvr.h"   /* dweb's own header - check the repo for the exact name */

/* the custom responder: called once for every incoming request */
void test_response(struct hitArgs *args,
   char *path, char *request_body, http_verb type)
{
    ok_200(args,
        "<html><head><title>Test Page</title></head>"
        "<body><h1>Testing...</h1>This is a test response.</body>"
        "</html>", path);
}

int main(int argc, char **argv)
{
    if (argc != 2 || !strcmp(argv[1], "-?"))
    {
        printf("hint: dweb [port number]\n");
        exit(0);
    }
    dwebserver(atoi(argv[1]), &test_response, NULL);
    return 0;
}

Not too bad - but it just gives the same response to every incoming request. The full source code includes more advanced examples showing an HTML form posting back some values. It also shows an Ajax call using jQuery.

Like 'nweb', it started out as a multi-process server, where each request spawned a new process using fork(). But I am now including support for single-threaded and multi-threaded modes as well, the default being multi-threaded.
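
For anyone who hasn't seen it, the fork-per-connection pattern boils down to a skeleton like this; it's a bare sketch with no error handling, not the actual dweb code:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* bare-bones fork-per-connection skeleton */
int main(void)
{
    struct sockaddr_in addr;
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listenfd, 64);
    signal(SIGCHLD, SIG_IGN);  /* auto-reap the child processes */

    for (;;)
    {
        int connfd = accept(listenfd, NULL, NULL);
        if (fork() == 0)       /* child: handle one request */
        {
            close(listenfd);
            /* ... read the request and write the response here ... */
            close(connfd);
            _exit(0);
        }
        close(connfd);         /* parent: go back to accept() */
    }
}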

Anyway, the source code is on GitHub, if you're interested (released under the MIT license). Meanwhile, I'll keep tinkering with it...

Running 'nweb' on Mac OS X

For some time I've admired the simple web server code called nweb, which can be found here. It's written in C and shows how you can build a nice simple little HTTP server without scary amounts of code. The code even runs on the Raspberry Pi.

It's written for Unix and Linux systems, but I wanted to see if it would work on my MacBook (which runs Mac OS X, obviously). Since Mac OS X has a Unix heritage, I hoped it would work without too much trouble...

So I fired up Xcode and created a blank "Command Line" application in C, then I just pasted in the nweb source code. It gave just one compilation error.

All I needed to do was replace SIGCLD with SIGCHLD in one line of code. Then it worked! Nice.
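
If I remember right, the line in question is the signal() call that nweb uses to avoid zombie child processes; SIGCLD is the old System V name, which Mac OS X doesn't define, whereas SIGCHLD is the portable spelling:

#include <signal.h>

int main(void)
{
    /* was: (void)signal(SIGCLD, SIG_IGN);                        */
    /* SIGCLD is the System V spelling; Mac OS X only has SIGCHLD */
    (void)signal(SIGCHLD, SIG_IGN);  /* auto-reap exited children */
    /* ... the rest of nweb carries on unchanged ... */
    return 0;
}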

So that's awesome, a very handy little command line web server for Mac OS X.

I'm sure I will find a use for that. The old brain cogs are whirring as I type this in fact.

Extra speedy H2D2 on the Mac

I remember being very happy when H2D2 first ran my Mandelbrot test program in about 650 milliseconds (whereas the original proof of concept, DALIS, took nearly 12 seconds to do the same thing).

Having moved over to my new MacBook and compiled the H2D2 source using Xcode, just look at the result now:

Whoa! 72 milliseconds! Now that's impressive. This new MacBook is pretty fast, but I'm sure the Apple LLVM compiler is also playing its part in achieving that level of speed. I have done some optimisation work in H2D2 recently, but I don't think I can take the credit this time.

It really pays to do this type of 'system level' programming in C (plus I enjoy it of course). Right, I'm off to work on parsing string expressions now...