
Installing Node.js on a Raspberry Pi

By Adam K Dean

Raspbian is based on Debian Wheezy, so things are a little different from the standard Ubuntu 14.04 installs. Remember to always read a script before you curl it to bash.

curl -sLS | sudo bash
sudo apt-get install -y nodejs

Now check it's running okay...

pi@raspberrypi ~ $ node -v

Okay, so it's a little behind, but still on 0.12.x.
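You can also check the version from inside Node itself; process.version exposes the same string, which is handy in scripts:

```javascript
// process.version holds the running Node version, e.g. "v0.12.7".
// Split it up if you want to gate behaviour on the major/minor.
var parts = process.version.replace(/^v/, "").split(".").map(Number);
console.log("major:", parts[0], "minor:", parts[1]);
```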

Notes on Raspberry Pi & Arduino

By Adam K Dean

Using NOOBS, install Raspbian. It's probably the best-supported distro for the Pi.

Set up the Raspberry Pi to connect automatically to WiFi. I've used a TP-Link TL-WN321G without any issues.

I'm using an old pre-2011 Arduino Uno. Its firmware version actually reports as 0.00, but I was able to get it working with Firmata. Firmata is a library that enables communication between a host and an Arduino. It allows you to use JavaScript frameworks such as johnny-five to control your Arduino with Node.

You only need to put Firmata on the Arduino once, so I did it on my MacBook. After this, the Arduino just starts up ready to communicate. No more programming required. Tethering is required, as the host will now wear the trousers.

First, download the Arduino IDE. On OS X: brew cask update && brew cask install arduino. Once installed, run it and make sure the Arduino is connected via USB. Make sure the correct board and port are selected in the IDE. Go to File, Examples, Firmata, and then StandardFirmata. Upload this to your board. Now you're set.

Let's quickly test it. Using Node, install johnny-five. Then stick an LED in Arduino pins 13 and GND. Then run the hello world blink code:

var five = require("johnny-five"),
    board = new five.Board();

board.on("ready", function () {
    var led = new five.Led(13);
    led.blink(500);
});

The LED should blink. If it doesn't, it's time to get your Google on.

Moving on, we want to control the Arduino via the Raspberry Pi. For this, you need to manage your power consumption properly.

Step 1. Power on the raspberrypi with the WiFi dongle connected. Wait for it to connect to the network.
Step 2. Start a continuous ping of the raspberrypi to check its connectivity.
Step 3. Power on the arduino with an external power supply.
Step 4. Plug the USB cable into the raspberrypi.
Step 5. Plug the USB cable into the arduino.

I don't know for sure, but I think that powering on the arduino with external power first, and only then connecting it via USB, disables its USB power draw, which stops your raspberrypi from throwing a fit.

Next article will cover: johnny-five arduino code/setup

Access boot2docker container IP

By Adam K Dean

When you run boot2docker, all your containers will be running on that VM, not on your local machine. Therefore, you won't be able to access them via their container IPs by default.

To access them, ensure that boot2docker is running, and run:

$ sudo route -n add -net 172.17.0.0/16 `boot2docker ip`

This tells your machine to direct all traffic on the docker IP subnet to the IP address that your boot2docker is running on.
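To make the route's scope concrete, here's a quick Node sketch that checks whether an address falls inside a given subnet; 172.17.0.0/16 is Docker's default bridge range, though yours may differ:

```javascript
// Check whether an IPv4 address falls inside a CIDR range.
// Any container IP inside Docker's bridge subnet becomes
// reachable once the route above exists.
function ipToInt(ip) {
    return ip.split(".").reduce(function (acc, octet) {
        return (acc << 8) + parseInt(octet, 10);
    }, 0) >>> 0;
}

function inSubnet(ip, cidr) {
    var parts = cidr.split("/");
    var base = ipToInt(parts[0]);
    var bits = parseInt(parts[1], 10);
    var mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
    return (ipToInt(ip) & mask) === (base & mask);
}

console.log(inSubnet("172.17.0.5", "172.17.0.0/16"));   // true
console.log(inSubnet("192.168.1.10", "172.17.0.0/16")); // false
```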

Discovering service discovery services

By Adam K Dean

The problem with service discovery is that, in order to use it to discover services, services must first discover the service discovery service.

Docker assigns IP addresses to containers dynamically, so you cannot guarantee where exactly a service will be. You don't want to pass the docker socket through to containers either, because that is a Bad Thing.


So the question is, where do you start? You have to start somewhere, you have to have some sort of anchor point that you can gather the rest of your information from. I'm using Consul for my service discovery, and running it inside a container, so this is where I want to start. I want services to be able to talk to Consul and discover other services, but the start point has to be Consul.

I thought about some sort of DNS so I could resolve consul.local, but that only shifts the problem to the DNS server. I thought of a docker lookup service, where I could ask for a name and get back the container's 172.17.0.??? IP, but again, that simply shunts the responsibility over to the docker lookup application, which is another moving part that can break.


The solution was to create a separate virtual network interface which only the Consul services would use. I could then statically assign an IP to the nodes when they start and those IPs would be accessible by any service on the network.
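The static-assignment idea can be sketched as a tiny allocator over a private range; the 10.0.1.x base here is an assumption for illustration, not the range I actually used:

```javascript
// Hand out fixed addresses from a private /24 to named nodes,
// so each node always gets the same address. The base is assumed.
function makeAllocator(base) {
    var next = 2; // .1 is conventionally the host interface itself
    var assigned = {};
    return function (name) {
        if (!(name in assigned)) {
            assigned[name] = base + "." + next++;
        }
        return assigned[name];
    };
}

var alloc = makeAllocator("10.0.1");
console.log(alloc("consul-node1")); // 10.0.1.2
console.log(alloc("consul-node2")); // 10.0.1.3
console.log(alloc("consul-node1")); // 10.0.1.2 (stable)
```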


To test this, I created the interface (see below for a permanent solution):

sudo ifconfig eth0:1 <ip> netmask <netmask> up

Then I ran a hello-world app bound to that IP:

docker run --rm -t -i -p <ip>:80:80 tutum/hello-world

I then navigated to that address and saw the hello world page. This is good: it means eth0:1 works. I wasn't able to create a docker0:1 interface without issues. As of yet I'm not sure why.

The next step is to launch Consul bound to that IP and then have other services connect to Consul now that the IP is known. For Node.js, I'm using node-consul. Once it knows the Consul server location, it should be able to pull all the relevant services.
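node-consul needs a live agent to talk to, but the shape of the lookup can be sketched with plain Node against a mocked catalog response; the service data below is made up, though the Address/ServicePort fields mirror Consul's /v1/catalog output:

```javascript
// Sketch of resolving service addresses from a Consul-style
// catalog response. The data below is mocked; the field names
// follow Consul's /v1/catalog/service/<name> output.
function resolveService(catalog, name) {
    var entries = catalog[name] || [];
    return entries.map(function (entry) {
        return entry.Address + ":" + entry.ServicePort;
    });
}

var mockCatalog = {
    web: [
        { Address: "172.17.0.2", ServicePort: 8080 },
        { Address: "172.17.0.3", ServicePort: 8080 }
    ]
};

console.log(resolveService(mockCatalog, "web"));
// [ '172.17.0.2:8080', '172.17.0.3:8080' ]
```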

To start Consul:

# This is only an example snipped from a larger file

readonly NAME="consul-node1"
readonly HOSTNAME="node1"
readonly NETADDR=""

docker run -d -h $HOSTNAME \
    --name $NAME \
    -p $NETADDR:8400:8400 \
    -p $NETADDR:8500:8500 \
    -p $NETADDR:8600:53/udp \
    progrium/consul -server -bootstrap -ui-dir /ui

Now you can connect to that address knowing that Consul is there.

A more permanent network interface

For a more permanent network interface, edit /etc/network/interfaces and drop in the following at the bottom:

# Private network for Consul services
auto eth0:1
iface eth0:1 inet static
    address <ip>
    netmask <netmask>

Then bring up the interface:

$ sudo ifup eth0:1

Always check that the interface came up ok, as ifconfig/ifup/ifdown are very strange beings who do what they want, when they want, usually in a way you don't want.


This is less of a structured post and more of a notes-as-I-go post. Please don't follow these instructions like they will build you a system. They will not. But what they might do is give you an idea of how certain things can work. Most of all, they will serve as a great resource for future me.

UFW, Ubuntu, OpenVPN, and Docker

By Adam K Dean

Today I've been playing with UFW, Ubuntu, OpenVPN, and Docker. A few other things as well but this post relates to those technologies, mainly UFW, Ubuntu, and Docker.

In case you didn't know, like I didn't know: when you expose ports, Docker thinks it a great idea to put some rules into iptables to let the traffic pass through. I'm sure there is a good reason for this, but you may find that when you enable UFW, traffic still gets through. Now that I understand what is going on, it all makes sense, but getting to that point has taken up my evening.

I followed this tutorial on How To Run OpenVPN in a Docker Container on Ubuntu 14.04, which was a breeze. It really highlighted the strengths of Docker. This setup assigns you an IP on the VPN's subnet.

My next step, now that I had VPN access, was to shut down eth0 on the machine, allowing through only ports 22 & 1194/udp. What I didn't know was that Docker was going to be a royal pain in the arse during this time. It would continue to bypass my rules and make me wonder what the hell was wrong with UFW. Nothing was, it was Docker playing with iptables.

To stop Docker from doing this, you have to start the Docker daemon, or Docker Engine I think it's now called, with the --iptables=false flag. To do this, on Ubuntu 14.04, open up /etc/default/docker in your favourite text editor and add the following line:

DOCKER_OPTS="--iptables=false"
Save that, and then restart the Docker daemon/engine/server thing:

sudo restart docker

Now, I'm not sure what happens here with existing containers, but I went ahead and deleted them and started fresh. Now when you start new containers, there won't be crazy rules bypassing UFW.

For UFW, I had to add a few rules. I added in ports 22 (SSH) and 1194/udp (VPN). I also added a rule to allow all traffic from docker0:

sudo ufw allow 22
sudo ufw allow 1194/udp
sudo ufw allow in on docker0 to any
sudo ufw enable
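Those rules amount to a small allow-list. As a sketch, here's a hypothetical helper (not a real tool, just an illustration) that renders the same ufw commands from a declarative spec:

```javascript
// Hypothetical helper: render ufw commands for an allow-list.
// Ports may carry a protocol suffix ("1194/udp"); each interface
// gets an "allow in on" rule. This only builds the strings.
function ufwRules(spec) {
    var cmds = spec.ports.map(function (p) {
        return "ufw allow " + p;
    });
    spec.interfaces.forEach(function (iface) {
        cmds.push("ufw allow in on " + iface + " to any");
    });
    cmds.push("ufw enable");
    return cmds;
}

console.log(ufwRules({ ports: ["22", "1194/udp"], interfaces: ["docker0"] }));
```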

Next, I started some containers and tried to access them. No access... I turned on the VPN, tried again, and bingo: I had access. It took a while to get there, but I got there. Secured access to my containers.

Now I need a long, hot bath to relax. Thanks a lot Docker!