How to set up mDNS lookups on the Raspberry Pi

I've got the new Raspberry Pi 2 and was setting it up the other day. The first thing that annoys me with vanilla Debian installations is that they don't have mDNS/zeroconf/avahi enabled by default. This technology is very useful: it lets you advertise services on your local network, resolve host names without setting up a DNS server, and much more.

Especially the convenience of not having to remember IP addresses for your machines is worth the work to set this up. With DHCP, a machine might not even get the same IP address every time.

To figure this out, I asked a question over at the Linux & Unix Stack Exchange, so parts of this blog entry are taken from there.

First, you might want to install avahi on your RasPi:

sudo apt-get install avahi-daemon

This should make the Pi resolvable by name from other machines -- which also have to support mDNS. Macs, for example, come with mDNS lookups enabled. So you should be able to ping your Pi just by using its host name plus the .local domain:

ping raspberrypi.local

Next, you want to install the client-side name service support for mDNS:

sudo apt-get install libnss-mdns

Make sure that the /etc/nsswitch.conf contains this line:

hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

There is probably already a line starting with "hosts:", which you can simply comment out with the #-sign.
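
To double-check that the new lookup order is actually picked up, you can resolve the Pi's name through the normal system resolver, which now consults mdns4_minimal. Here is a minimal Python sketch; the hostname raspberrypi.local is just the default from the ping example above, so substitute your own if you renamed the Pi:

import socket

# Ask the system resolver (and thus libnss-mdns) for the Pi's addresses.
try:
    addresses = socket.getaddrinfo( "raspberrypi.local", None )
    # Print the unique IP addresses that were found.
    print( sorted( set( entry[ 4 ][ 0 ] for entry in addresses ) ) )
except socket.gaierror as error:
    print( "mDNS lookup failed: %s" % error )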

For added convenience, you may want to add the sshd to the advertised services of avahi. Simply add a file /etc/avahi/services/ssh.service containing the following lines:

<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service><type>_ssh._tcp</type><port>22</port></service>
</service-group>

This should let you use mDNS on your Pi and see the advertised services from your other machines in the local network. If you are using Plex Media Server (see this great post), it will also utilize avahi to advertise its services.


Autocompletion for C and C++ with Emacs 24

I have already done a blog post on auto completion using Emacs. But that was back in Emacs 23 days. Long ago...

Since then a lot has happened: Emacs 24 has been released, package archives like MELPA and ELPA have become standard, and company-mode seems to be winning against auto-complete. Also, clang has made huge strides forward.

So it is time to revisit the task of developing C or C++ using Emacs. I have put online an easy-to-install Emacs init.el that you can use as a start for your own development environment. I am using the OS X version of Emacs, but this should also work on Linux, given that you have clang and git installed.

You can install this init.el by issuing the following commands (run the git clone from your home directory, so that the relative symlink resolves correctly):

git clone https://github.com/root42/yet-another-emacs-init-el
mkdir ~/.emacs.d
cd ~/.emacs.d
ln -snf ../yet-another-emacs-init-el/init.el .

The file will make sure that upon startup of Emacs all necessary packages are installed, such as company and magit, but also LaTeX tools like AUCTeX and RefTeX. You can disable this check (or individual packages) in the init.el.

The code is up on github, so feel free to fork and/or contribute.

When doing simple C++ programming using only the standard library, you should be ready to go. For more complex projects, you will have to tweak the clang parameters so that the compiler can find your header files. Completion happens automatically after ".", "->" and "::", but also when pressing M-/. You can rebind this key in the init.el, of course.

My most commonly used keys that I have re-bound are as follows:
  • f3 - Runs ff-find-other-file, trying to switch between header and implementation for C/C++ programs.
  • f4 - Toggles the last two used buffers.
  • f5, f6 - If tabbar is enabled (tabbar-mode), navigates back/forward through tabs.
  • f7 - Toggle ispell dictionaries (German/English).
  • f8 - Kill current buffer.
  • f9 - Run compile.
  • M-? - Run grep.
  • M-n - Go to next error in compilation buffer.
  • M-S-n - Go to previous error in compilation buffer.
  • M->, M-< - Go to next/previous Emacs frame.
  • M-/ - Run autocompletion using company mode.
  • C-x o, C-x C-o - Go to next/previous Emacs window.
Also very nice is magit-mode, which is a very sane interface to git for Emacs. You can run it by executing M-x magit-status. Just type ? to get online help. Magit uses simple one-character commands, like s for stage, c for commit, p for push and so on.

Currently I am evaluating the integration of lldb into Emacs, but I haven't gotten far enough to say that I have found a powerful and flexible interface beyond the standard command line. So there's more to come, hopefully!


How to view Clojure docs in Emacs Cider

Developing Clojure with Emacs Cider is great. To view summaries for Clojure functions and special forms, type this in your Cider REPL:

(use 'clojure.repl)

Afterwards, you can do stuff like this:
my.repl> (doc conj)
([coll x] [coll x & xs])
  conj[oin]. Returns a new collection with the xs
    'added'. (conj nil item) returns (item).  The 'addition' may
    happen at different 'places' depending on the concrete type.


How to automatically refresh Cider when saving a Clojure file in Emacs

Emacs has a great mode for working with Clojure called Cider. Cider comes with an interactive REPL. The REPL allows you to test your code, start web apps (if you are using Luminus, for example), and all in all accelerates development. One annoying thing, though, is that you have to refresh Cider every time you save your sources. The good news is that you can do this automatically. Just add the following to your init.el. I took the function from a blog post by Kris Jenkins:

  (add-hook 'cider-mode-hook
            '(lambda ()
               (add-hook 'after-save-hook
                         '(lambda ()
                            (if (and (boundp 'cider-mode) cider-mode)
                                (cider-namespace-refresh))))))

  (defun cider-namespace-refresh ()
    (interactive)
    (cider-interactive-eval
     "(require 'clojure.tools.namespace.repl)
      (clojure.tools.namespace.repl/refresh)"))

  (define-key clojure-mode-map (kbd "C-c C-r") 'cider-namespace-refresh)

Additionally, you can now run the refresh command manually, by hitting C-c C-r.

Update: I changed the after-save-hook to only trigger if we are in a buffer that has cider-mode enabled. Otherwise every save command in Emacs would have triggered the refresh!


Forwarding emails using fetchmail and msmtp

My goal here was to forward emails from my GMX freemail account to my iCloud account. Up until now, I used GMX's own forwarding capability, which is a bit hidden in the filter settings. However, iCloud cranked up its spam filtering and is now using Spamhaus blacklists, which very often label the GMX forwarding servers as bad.

Hence I would only get bounce mails instead of the actual mails. Since this is no good, I set up fetchmail and msmtp on my root server to do the forwarding for me. First, fetchmail gets all mail from GMX via POP3 and passes it on to msmtp, which in turn passes it on to the iCloud MX server.

At first I tried to deliver the mail via authenticated SMTP, but iCloud refuses mails sent this way if the From: header does not contain one of your own iCloud aliases. This is a problem most of the time, since we are trying to forward emails that were sent to you, not by you.

So first, let's look at the ~/.fetchmailrc (make sure to chmod 0600 it):

poll pop.gmx.net
with proto POP3
user "user@gmx.net"
there with password "secretpassword"
mda "/usr/bin/msmtp -- someuser@icloud.com"
no keep
sslcertpath /etc/ssl/certs
set daemon 300

This will poll GMX every 300 seconds and pass the received mails to msmtp for delivery to someuser@icloud.com.

The corresponding ~/.msmtprc looks like this:

account default
host mx6.mail.icloud.com
port 25
auto_from off
from "user@localdomain"
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile ~/.msmtp.log
domain mx.of.localdomain

You can find out the valid mx entries for iCloud by running nslookup -type=mx icloud.com.
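
If you want to verify up front that the MX host you picked accepts connections on port 25 and offers STARTTLS, a short Python check can help. This is just a sketch; mx6.mail.icloud.com is the host from the example config above, so substitute whatever nslookup returns for you:

import smtplib

# Connect to the MX host on port 25 and upgrade the session to TLS,
# mirroring what msmtp will do during delivery.
server = smtplib.SMTP( "mx6.mail.icloud.com", 25, timeout=15 )
server.ehlo()
server.starttls()
server.ehlo()
print( server.noop() )   # a 250 reply means the server is reachable
server.quit()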

The configuration files above assume Debian stable. Other distributions or operating systems may keep the SSL certificates in different places in the file system.


hal+json is the way to go for representing REST resources

If you are implementing REST APIs, and are thinking about using application/json as your content type, please consider application/hal+json. It allows you to represent link relations and embedded resources in a standardized manner.

The content type offers three things:

  1. Semantic link relations using the _links key. The self link is a good example.
  2. Embedded resources using the _embedded key, which are a subset of the link relations.
  3. Properties with arbitrary keys.

As an example, you might have a programmable power strip with three sockets. The strip itself can be modeled as a resource. It will have three optionally embedded resources with the link relation sockets. These resources themselves will have a link relation called toggle. Doing a POST on this resource, for example, will toggle the socket. The lovely thing here is that the API gives you the correct URL for turning the socket on or off. An example request and response:

GET http://somedomain/powerstrip/1
Accept: application/hal+json

{
    "_links" : {
        "self" : { "href" : "http://somedomain/powerstrip/1" },
        "sockets" : [
            { "href" : "http://somedomain/powerstrip/1/sockets/1", "name" : "Socket 1" },
            { "href" : "http://somedomain/powerstrip/1/sockets/2", "name" : "Socket 2" },
            { "href" : "http://somedomain/powerstrip/1/sockets/3", "name" : "Socket 3" }
        ]
    },
    "_embedded" : {
        "sockets" : [
            {
                "_links" : {
                    "self" : { "href" : "http://somedomain/powerstrip/1/sockets/1" },
                    "toggle" : { "href" : "http://somedomain/powerstrip/1/sockets/1/off" }
                },
                "state" : "on"
            },
            {
                "_links" : {
                    "self" : { "href" : "http://somedomain/powerstrip/1/sockets/2" },
                    "toggle" : { "href" : "http://somedomain/powerstrip/1/sockets/2/on" }
                },
                "state" : "off"
            },
            {
                "_links" : {
                    "self" : { "href" : "http://somedomain/powerstrip/1/sockets/3" },
                    "toggle" : { "href" : "http://somedomain/powerstrip/1/sockets/3/on" }
                },
                "state" : "off"
            }
        ]
    },
    "numberOfSockets" : 3,
    "voltage" : 230
}

The content type is described in an RFC draft and is well on its way to becoming a standard. Make sure to also read the associated web linking RFC.

Amazon is already using this content type in its AppStream API.
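
To show why the hypermedia links are so convenient, here is a small Python sketch of a client that toggles every socket that is currently on. It uses the requests library and the example URLs and field names from the response above (which is of course not a real endpoint); note that the client never constructs a toggle URL itself:

import requests

# Fetch the power strip resource as HAL+JSON.
strip = requests.get( "http://somedomain/powerstrip/1",
                      headers={ "Accept": "application/hal+json" } ).json()

# Walk the embedded sockets and toggle every socket that is currently on,
# following the toggle link relation the server handed out.
for sock in strip[ "_embedded" ][ "sockets" ]:
    if sock[ "state" ] == "on":
        requests.post( sock[ "_links" ][ "toggle" ][ "href" ] )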

The strength of using link relations and a content type such as HAL is that you can actually document your link relations, which are a fundamental part of your API. You should actively design the link relations and make them meaningful.

For resources, I recommend that you document the following aspects:

  1. Expected link relations and their embeddedness
  2. Attributes of the resource
  3. Example method calls for all allowed methods (GET, POST, ...) with example responses 

For link relations you should document these aspects:

  1. Synopsis what the link relation means or represents, and which resource is to be expected
  2. Allowed methods with optional templated arguments

I place the example method calls with the resource documentation, since they might be redundant if specified with the link relations. But you should link to the resource documentation from the associated link relation documentation.


Extracting Names from Email Addresses

Given a CSV file in which the fields are separated by semicolons and the third field contains an email address, the task is to extract the names from the addresses. We assume that the name parts in the local part of each address are separated by periods (.) and that they should be capitalized in the output:
#!/usr/bin/env python

import sys

if len( sys.argv ) < 2:
    print "Usage: %s filename" % sys.argv[ 0 ]
    sys.exit( 1 )

textFileName = sys.argv[ 1 ]
textFile = open( textFileName, "r" )

for line in textFile:
    # Split the semicolon-separated line into its fields.
    fields = line.strip().split( ';' )
    # The third field holds the email address; keep only the local part.
    email = fields[ 2 ].split( "@" )
    # The local part consists of the name parts separated by periods.
    emailName = email[ 0 ].split( '.' )
    # Capitalize each name part.
    capitalizedName = [ x[:1].upper() + x[1:].lower() for x in emailName ]
    # Print first name, remaining names and the original address.
    print '%s;%s;%s' % ( capitalizedName[ 0 ], ' '.join( capitalizedName[ 1: ] ), fields[ 2 ] )
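
For illustration, this is the core transformation applied to a single made-up address; both the address and the resulting line are hypothetical, since the original CSV data is not shown here:

# Hypothetical address, only used to illustrate the expected output.
email = "jane.maria.doe@example.com"
nameParts = email.split( "@" )[ 0 ].split( '.' )
capitalized = [ x[:1].upper() + x[1:].lower() for x in nameParts ]
# Prints: Jane;Maria Doe;jane.maria.doe@example.com
print '%s;%s;%s' % ( capitalized[ 0 ], ' '.join( capitalized[ 1: ] ), email )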