If your main use of GDB is debugging embedded devices, you can't go too long without encountering the GDB remote protocol. This is the protocol used to communicate between the GDB application and the debugger, usually over either TCP or serial.
Although this protocol is documented, it is not always clear exactly which packets are actually used and when. Not knowing which packets to expect makes implementing the debugger side of things a little tricky. Thankfully there is a straightforward way to see what is going on:
(gdb) set debug remote 1
If you are developing stuff on ARM Cortex-M series devices and need a reasonably priced debugger, I'd have to recommend the Black Magic Probe. For about 70 bucks you get a pretty fast debugger that directly understands the GDB remote protocol. This is kind of neat as you don't need to run a local GDB server (e.g.: stlink); the debugger appears as a standard CDC ACM USB serial device.
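Getting started with it is just a matter of pointing GDB at the serial device it exposes; on my machine that looks roughly like the following (the exact device path will vary between systems):

(gdb) target extended-remote /dev/ttyACM0
(gdb) monitor swdp_scan
(gdb) attach 1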
The thing about it which is pretty cool is that the debugger firmware is open source, so when you suspect a bug in the debugger (which, just like compiler bugs, does happen in the real world) you can actually go and see what is going on and fix it. It also means you can add features when they are missing.
Although the debugger has some support for ARM semihosting, unfortunately this support is not comprehensive, which means that if you use an unsupported semihosting operation in your code you end up with a nasty SIGTRAP rather than the operation you were expecting.

Unfortunately one of the simplest operations, SYS_WRITEC, which simply outputs a single character, was missing, which was disappointing since my code used it rather extensively for debug output! But one small commit later and the debug characters are flowing again. (As with many of these things, the two lines of code were the easy bit; the hardest and most time consuming bit was actually installing the necessary build prerequisites!)
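For reference, a SYS_WRITEC call from the target side looks roughly like the sketch below. The operation number (0x03) and the bkpt 0xAB trap come from the ARM semihosting specification; the function names here are just illustrative.

/* Illustrative sketch of a SYS_WRITEC semihosting call on a Cortex-M part.
 * The debugger traps the 'bkpt 0xAB' instruction, reads the operation
 * number from r0 and the parameter from r1, and performs the request. */
#define SEMIHOSTING_SYS_WRITEC 0x03

static inline int semihosting_call(int op, void *param)
{
    register int r0 __asm__("r0") = op;
    register void *r1 __asm__("r1") = param;
    __asm__ volatile ("bkpt 0xAB" : "+r"(r0) : "r"(r1) : "memory");
    return r0;
}

void debug_putc(char c)
{
    /* SYS_WRITEC takes a pointer to the character to output. */
    semihosting_call(SEMIHOSTING_SYS_WRITEC, &c);
}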
My Python EXIF parsing library is joining the slow drift of projects away from Google Code to GitHub. If you are interested in contributing please send pull requests and I'll attempt to merge things in a timely manner.
As you can probably tell from the commit log, I haven't really been actively working on this code base for a while, so if anyone out there feels like maintaining it, please just fork on GitHub, let me know, and I'll point people in your direction.
Abstract: This article provides a short explanation of how I fixed a problem I was having with node. In doing so it provides an introduction to some of node's internal architecture.
There are times when running a node webserver where, rather than opening a network socket yourself, you want to use an existing socket. For example, I've got a simple loader which does some minimal setup, opening a socket and binding to a port, before exec()-ing another server (in this case a node server). In pseudo-code it looks something like:
s = socket(...);
bind(s, ...);
setuid(unprivileged_uid);
exec(...);
The goal behind this is to acquire some privileged resources (such as a port number less than 1024) while running as root, and then drop privileges before executing a larger, less trusted, code base. (I'm sure there are other potential approaches for achieving a similar kind of goal, but that isn't really what this post is about.)
Anyway, if you use this approach, when calling a createServer API, rather than providing an address and port when listening, you can provide a file descriptor. Effectively what happens is that the code operates something like:
if (fd is null) {
    fd = socket(...)
    bind(fd, ...)
}
fd.listen()
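As a concrete sketch (the exact listen() signature for this has varied between node versions, so treat it as illustrative rather than definitive), the node side ends up looking something like the following, where fd 3 is the already-bound socket inherited from the loader:

var http = require('http')

// Hypothetical: fd 3 was opened, bound and passed down by the loader.
var server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'})
  res.end('Hello World\n')
})

// Instead of a port and address, hand the existing descriptor to listen().
server.listen({fd: 3})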
This pattern worked really well for me on node around the time of version 0.4. As time passed and I upgraded from 0.4 through to 0.8 this approach appeared to continue working really well. Unfortunately, appearances can be deceiving! It wasn't until much later, when trying to log a request's remote address, that I started to run into any problems. For some reason, despite everything mentioned in the docs, all attempts to dereference the remoteAddress property resulted in failure. It's at this time that open source (or at least source available) libraries really shine, as they make debugging really possible. A few dozen printfs later (well, OK, a combination of console.log and printf), and I was able to rule out my code, any of express.js, and any of the node http.js and https.js libraries.
The problem was that my socket object didn't have a getpeername method, which is weird, because you expect sockets to have a getpeername method. The reason that the socket object didn't have a getpeername method is that it wasn't actually a socket. Well, unfortunately things aren't quite that simple. There are a number of layers of abstraction in node that make some of this more difficult to understand!
The top level of abstraction is a JavaScript object based on node's Socket prototype. The Socket prototype provides a number of useful methods; most of the things that you would expect from a socket: connect, read, write, etc. It also defines the property we are most interested in: remoteAddress. Now this is a really simple property:
Socket.prototype.__defineGetter__('remoteAddress', function() {
  return this._getpeername().address;
});
As you can see, this property defers all the real work to another method, _getpeername:
Socket.prototype._getpeername = function() {
  if (!this._handle || !this._handle.getpeername) {
    return {};
  }
  if (!this._peername) {
    this._peername = this._handle.getpeername();
    // getpeername() returns null on error
    if (this._peername === null) {
      return {};
    }
  }
  return this._peername;
};
Now, reading the code for _getpeername it should be clear that the Socket object is primarily a wrapper for an underlying _handle. After exploring the code in net.js some more it becomes clear that _handle could be one of two different things: a Pipe() or a TCP() object. These objects are implemented in C++ rather than JavaScript. Reading the code for these objects (in pipe_wrap.cc and tcp_wrap.cc) it becomes clear that the two have very similar implementations, however the TCP object provides a few more features; critically it provides the getpeername method, whereas the Pipe object does not. This is the key to the underlying problem: my Socket objects have a Pipe object as the handle, rather than a TCP object. The next question is why!
The answer lies in the createServerHandle function residing in net.js. This is the function that is eventually called when the HTTP listen method is called. It has some code like this:
if (typeof fd === 'number' && fd >= 0) {
  var tty_wrap = process.binding('tty_wrap');
  var type = tty_wrap.guessHandleType(fd);
  switch (type) {
    case 'PIPE':
      debug('listen pipe fd=' + fd);
      // create a PipeWrap
      handle = createPipe();
      break;

    default:
      // Not a fd we can listen on.  This will trigger an error.
      debug('listen invalid fd=' + fd + ' type=' + type);
      handle = null;
      break;
  }
  ....
The upshot of this code is that if an fd is specified, node tries to guess what kind of thing the file descriptor refers to, and creates the handle based on this guess. The interesting thing about the switch statement is that it only has a single valid case: PIPE. This is interesting for a couple of reasons; it clearly isn't hitting the default error case, which means that node somehow guesses that my file descriptor is a PIPE, which is strange, since it is definitely a socket. The kind of amazing thing is that, apart from the inability to retrieve the remote address, everything in the system works perfectly. We'll come back to this point later, but for now the challenge is to try and find out why the guessHandleType method would think my socket is a pipe!
Again we get to play the follow-the-layers-of-abstraction game. guessHandleType is implemented in the tty_wrap binding:
Handle<Value> TTYWrap::GuessHandleType(const Arguments& args) {
  HandleScope scope;
  int fd = args[0]->Int32Value();
  assert(fd >= 0);

  uv_handle_type t = uv_guess_handle(fd);

  switch (t) {
    case UV_TTY:
      return scope.Close(String::New("TTY"));
    case UV_NAMED_PIPE:
      return scope.Close(String::New("PIPE"));
    case UV_FILE:
      return scope.Close(String::New("FILE"));
    default:
      assert(0);
      return v8::Undefined();
  }
}
So, this wrapper function is really just turning results from an underlying uv_guess_handle function into strings. Not much of interest, although it is somewhat interesting to note that at one layer of abstraction things are called NAMED_PIPE but simply PIPE at another. Looking at uv_guess_handle gets more interesting:
uv_handle_type uv_guess_handle(uv_file file) {
  struct stat s;

  if (file < 0) {
    return UV_UNKNOWN_HANDLE;
  }

  if (isatty(file)) {
    return UV_TTY;
  }

  if (fstat(file, &s)) {
    return UV_UNKNOWN_HANDLE;
  }

  if (!S_ISSOCK(s.st_mode) && !S_ISFIFO(s.st_mode)) {
    return UV_FILE;
  }

  return UV_NAMED_PIPE;
}
Huh. So, from this code it becomes clear that if a file descriptor is a FIFO or a SOCK, the uv library wants to treat the file descriptor as a UV_NAMED_PIPE. This would seem to be the source of the problem. Clearly named pipes and sockets are different things, and it is strange to me that this function would force a socket to be treated as a named pipe (especially when the upper layers treat the two things differently). As an aside, this is the place where comments in code are essential. At this point I have to guess why this is written in this non-obvious manner. Some potential explanations come to mind: compatibility with the Windows version, expediency, or a restriction on the API to only return one of { TTY, UNKNOWN, FILE or NAMED_PIPE }. The other odd thing is that a macro S_ISREG is also provided that could test explicitly for something that is a UV_FILE, but it isn't used, so things such as directories, character devices and block devices are also returned as just FILE. Without comments it is difficult to tell if this was intentional or accidental.
So the first fix on the way to solving the overall problem is to update uv_guess_handle so that it can actually return an appropriate value when we have a socket and not a pipe. There are two possible defines that this could be: UV_TCP or UV_UDP. There is likely some way to distinguish between the two, however for my purposes I know this is always going to be TCP, so I'm going to return UV_TCP. Of course, I'm sure this will cause someone else a similar problem down the track, so if you know what the appropriate interface is, let me know!
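For the record, my local change ends up looking roughly like this (a sketch of the patch rather than anything that has been upstreamed): check for sockets before the FIFO case and report them as UV_TCP.

uv_handle_type uv_guess_handle(uv_file file) {
  struct stat s;

  if (file < 0) {
    return UV_UNKNOWN_HANDLE;
  }

  if (isatty(file)) {
    return UV_TTY;
  }

  if (fstat(file, &s)) {
    return UV_UNKNOWN_HANDLE;
  }

  /* Report sockets as TCP handles rather than lumping them in with
   * named pipes. (As noted above, this assumes the socket is TCP.) */
  if (S_ISSOCK(s.st_mode)) {
    return UV_TCP;
  }

  if (!S_ISFIFO(s.st_mode)) {
    return UV_FILE;
  }

  return UV_NAMED_PIPE;
}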
Another approach would be to leave this function alone entirely and provide some way of telling listen and createServer whether the file descriptor is a named pipe or a socket, however dealing with all the parameter marshalling through the stack made this somewhat unattractive.
Once this is done, net.js can then be fixed up to create a TCP() object, rather than a Pipe() object, in createServerHandle.
Aside: I find it amazing that a 43-line patch requires around 1400 words of English to explain the rationale and back-story!
Now we've fixed enough of the underlying libraries that we can think about fixing net.js. What we'd like to do is something like this:
var type = tty_wrap.guessHandleType(fd);
switch (type) {
  case 'PIPE':
    debug('listen pipe fd=' + fd);
    // create a PipeWrap
    handle = createPipe();
    break;

  case 'TCP':
    debug('listen socket fd=' + fd);
    handle = createTCP();
    break;

  default:
    // Not a fd we can listen on.  This will trigger an error.
    debug('listen invalid fd=' + fd + ' type=' + type);
    global.errno = 'EINVAL'; // hack, callers expect that errno is set
    handle = null;
    break;
}

if (handle) {
  handle.open(fd);
  handle.readable = true;
  handle.writable = true;
}

return handle;
This looks pretty good, but unfortunately it doesn't work. That's because although the Pipe() object supports an open method, the TCP() object (which has a very similar set of APIs) does not support the open method! How very frustrating. Now, the implementation of pipe's open method is relatively straightforward:
Handle<Value> PipeWrap::Open(const Arguments& args) {
  HandleScope scope;

  UNWRAP(PipeWrap)

  int fd = args[0]->IntegerValue();

  uv_pipe_open(&wrap->handle_, fd);

  return scope.Close(v8::Null());
}
That looks pretty easy to implement on the TCP object, however we hit a problem when we get to the uv_pipe_open line. This function takes something of uv_pipe_t type, whereas the handle in a TCP object has uv_tcp_t type. Dropping down another level of abstraction again, we need to add a new uv_tcp_open function that takes a uv_tcp_t parameter. It turns out that this is pretty straightforward:
int uv_tcp_open(uv_tcp_t* tcp, uv_file fd) {
  return uv__stream_open((uv_stream_t*)tcp,
                         fd,
                         UV_STREAM_READABLE | UV_STREAM_WRITABLE);
}
The only frustrating thing is that this is almost an exact duplicate of the code in uv_pipe_open. Such egregious duplication of code rubs me the wrong way, but I don't think there is a good alternative.
At this point I do find it somewhat amazing that there are four levels of abstraction in play:

1. The kernel's file descriptor, which is just an int.
2. The libuv uv_tcp_t and uv_pipe_t objects. These have similar shaped APIs, however the specific function names are distinct.
3. The C++ TCP() and Pipe() wrapper objects.
4. The JavaScript Socket object, which provides properties such as remoteAddress (via _getpeername).

I'm sure it is done for a good reason (most likely to provide compatibility with Windows, where named pipes and sockets are probably different at the kernel interface layer), however it is somewhat amusing that the abstraction moves from the lowest kernel level, where both objects are just represented as ints and any function can be applied, up through two levels of abstraction that make each thing a very different type, before ending up at a top level where they are essentially recombined into a single type.
In any case, after a lot of plumbing, we are able to make sure that when we listen on a socket file descriptor we actually get back socket objects that correctly support the remoteAddress property, and all is good with the world.
Error handling is a total pain no matter which method you choose to use; in Python we are more or less stuck with exceptions. When you have exceptions, if you want any chance of debugging program failures, you want to see the stack trace for any uncaught exceptions. Python usually obliges by spewing out a stack trace on stderr. However, it isn't too hard to get into a situation where you end up losing those stack traces, which leads to a bunch of head scratching.
When you have a server, you usually run it daemonized. When running as a daemon, it is not uncommon for any output to be redirected to /dev/null. In this case, unless you have arranged otherwise, your stack traces are going to disappear into the ether.
When you have a server style program, you definitely want to be using the Python logging system. This lets you output messages to logfiles (or syslogd). So ideally, you want any stack traces to go here as well.
Now, this is fairly straightforward; you can just make sure your top level function is wrapped in a try/except block. For example:
try:
    main()
except:
    logging.exception("Unhandled exception during main")
Another alternative is setting up a custom excepthook (i.e.: assigning to sys.excepthook).
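A minimal version of that approach looks something like the following (the hook function name here is arbitrary):

import logging
import sys


def log_excepthook(exc_type, exc_value, exc_traceback):
    # Send the traceback to the log rather than letting Python print it to stderr.
    logging.critical("Unhandled exception",
                     exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = log_excepthook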
This works great, unless you happen to be using the threading module. In that case, any exceptions in your run method (or the function you pass as a target) will actually be internally caught by the threading module (see the _bootstrap_inner method). Unfortunately this code explicitly dumps the stack trace to stderr, which isn't so useful.
Now, one approach to dealing with this is to have every run method or target function explicitly catch any exceptions and output them to the log, however it would be nice to avoid duplicating this handling everywhere.
The solution I came up with was a simple subclass of the standard Thread class that catches the exception and writes it to the log.
import logging
import threading


class LogThread(threading.Thread):
    """LogThread should always be used in preference to threading.Thread.

    The interface provided by LogThread is identical to that of
    threading.Thread, however, if an exception occurs in the thread the
    error will be logged (using logging.exception) rather than printed to
    stderr.

    This is important in daemon style applications where stderr is
    redirected to /dev/null.

    """
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._real_run = self.run
        self.run = self._wrap_run

    def _wrap_run(self):
        try:
            self._real_run()
        except:
            logging.exception('Exception during LogThread.run')
Then, use the LogThread class where you would previously use the Thread class.
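For example (a trivial illustration; the worker function is obviously made up):

def worker():
    raise ValueError('something went wrong in the thread')

# Any exception raised in worker() ends up in the log rather than on stderr.
t = LogThread(target=worker)
t.start()
t.join()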
Another alternative approach would be to capture any and all stderr output and redirect it to the log. An example of this approach can be found in electric monk's blog post "Redirect stdout and stderr to a logger in Python".
One thing that can be a little confusing with Python is how packages work. Packages let you group your modules together and gives you a nice namespace. You can read all about them in the Python docs.
Now one thing that can be pretty confusing is that importing a package does not mean that any modules inside that package are loaded.
Imagine a very simple package called testing, with a single foo module. E.g.:
testing/
    __init__.py
    foo.py
The foo module might look something like:
def bar():
    return 'bar'
Now, you might expect to be able to write code such as:
import testing

print(testing.foo.bar())
However, trying this won't work; you end up with an AttributeError:
Traceback (most recent call last):
  File "t.py", line 2, in <module>
    testing.foo.bar()
AttributeError: 'module' object has no attribute 'foo'
So, to fix this you need to actually import the module. There are (at least) two ways you can do this:
import testing.foo
from testing import foo
Either of these puts testing.foo into sys.modules, and testing.foo.bar() will work fine.
But, what if you want to load all the modules in a package? Well, as far as I know there isn't any built-in approach to doing this, so what I’ve come up with is a pretty simple function that, given a package, will load all the modules in the package, and return them as a dictionary keyed by the module name.
import os


def plugin_load(pkg):
    """Load all the plugin modules in a specific package.

    A dictionary of modules is returned indexed by the module name.

    Note: This assumes packages have a single path, and will only find
    modules with a .py file extension.

    """
    path = pkg.__path__[0]
    pkg_name = pkg.__name__
    module_names = [os.path.splitext(m)[0] for m in os.listdir(path)
                    if os.path.splitext(m)[1] == '.py' and m != '__init__.py']
    imported = __import__(pkg_name, fromlist=module_names)
    return {m: getattr(imported, m) for m in module_names}
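Using the testing package from earlier as an example, usage looks something like:

import testing

plugins = plugin_load(testing)
print(plugins['foo'].bar())   # prints 'bar'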
There are plenty of caveats to be aware of here. It only works with modules ending in .py, which may miss some cases. Also, at this point it doesn't support packages that span multiple directories (although that would be relatively simple to add). Note: this code was tested on Python 3.2; it probably needs some modification to work on 2.x (in particular I don't think dictionary comprehensions exist in 2.x).
If you’ve got a better way for achieving this, please let me know in the comments.
tl;dr git destroyed my data; my team now has severe trust issues with git
We ask a lot from our source control systems. We want them to be flexible, fast, distributed, clever and even easy-to-use. But the number 1 thing we should demand from a source control system is that it doesn’t destroy our data. Probably most importantly, it shouldn’t ever lose stuff that has been committed, but just behind that it really shouldn’t destroy data in our working directory.
When you find out that your source control system has lost your data you end up in a very bad place. Once your source control system destroys your data once, you immediately have a severe break-down of trust between yourself and your tool. You revert to using cp -R to create backups before doing anything with the tool, just in case it destroys your data again.
Development was proceeding along at a cracking pace, and I was getting ready to commit a series of important changes. Before doing so, I wanted to merge in the recent changes from the remote master, so I did the familiar git pull. It complained about some files that would be overwritten by the merge, so I saved a backup of my changes, then reverted my changes in those specific files, and proceeded.
The merge went fine, and pulled in all the remote changes to the working directory. Getting ready for the commit, I do a git status and start to get a little concerned; one of the files I've been heavily editing doesn't show up in the status list. I cat the file to see what is going on; it seems none of my changes are there. Now I'm getting concerned; maybe I'm going slightly crazy after 3 days straight hacking, but I'm sure I made some changes to this file. I scroll up the terminal backlog to the git status I did before the pull. Sure enough, the file is marked as changed there, but not after the merge. I carefully read the full details from the merge; my file isn't listed as being touched there. Now I am really worried. git has just gone and destroyed the last 5 or 6 hours worth of work. Not happy!
Luckily, I was able to reconstruct most of the work from editor buffers, which I still had open.
But now I am worried. Why the fuck did git decide to destroy data in my working directory, without even telling me? Did I do something wrong? Is this a case I should know about? I had to investigate.
So, I took a snapshot of my repository, rolled back to the revision before the merge, made some minor modifications to my file, then ran the merge again. And, again, git destroyed the change in the working directory. Now this isn't normal behaviour; something is really fucked. The only thing slightly interesting about the file in question is that it had been recently renamed. Somehow this rename had confused the merge, and the merge was silently overwriting files.
Now git has a few different merge strategies, so I tried out some different ones. This was a pretty simple merge with 2 heads, so the options were really recursive or resolve. git picks recursive by default, so I tried resolve instead. This worked fine. Surprisingly this made me feel a little better; I wasn't completely crazy, silently updating files in my working directory wasn't intended behaviour, there had to be something wrong in the recursive merge.
So, I updated to the latest version in homebrew. Same problem.
Then it was time to start debugging git for real. So I downloaded the source (using git of course) and started having a look through merge-recursive.c. It didn't look too bad, but there was clearly going to be a lot to learn if I was going to debug this. Before I started littering the code with prints I thought I'd better just see if head had the same problem. Lo and behold, head worked! OK, cool, they fixed the bug. But that isn't really a satisfying answer. Just for fun I checked out some random versions to try and narrow down when the bug was fixed. In doing so I found that it actually worked in some old versions, then didn't work, and then finally worked again in the very latest. Here are my raw notes:
1.7.1               => good
1.7.2               => good
1.7.3               => good
1.7.4               => bad
1.7.5               => bad
1.7.6.1 (installed) => bad
1.7.6.1 (checkout)  => bad
1.7.6.4             => bad
1.7.7-rc0           => fail
1.7.7-rc1           => pass
1.7.7-rc3           => pass
OK, this is getting more interesting. So somewhere between 1.7.3 and 1.7.4 this bug was introduced. I started using git bisect to narrow things down. I quickly got bored of manually doing git bisect good and git bisect bad; luckily I stumbled upon git bisect run, which automates the whole process. After about 20 minutes of compiling and testing it found the bad commit.
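git bisect run just needs a script that exits with 0 for a good revision and non-zero for a bad one; my setup was roughly the following (the test script itself is specific to my repository, so this is just the shape of it):

# mark the known-bad and known-good versions, then let git drive the test script
git bisect start v1.7.4 v1.7.3
git bisect run ./test-merge.sh   # exits 0 if the merge behaves, non-zero if it clobbers the change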
commit 882fd11aff6f0e8add77e75924678cce875a0eaf
Author: Elijah Newren
Date:   Mon Sep 20 02:29:03 2010 -0600

    merge-recursive: Delay content merging for renames

    Move the handling of content merging for renames from process_renames()
    to process_df_entry().

    Signed-off-by: Elijah Newren
    Signed-off-by: Junio C Hamano
OK, lots of talk about merge-recursive and renames. That sounds like it makes sense; at least there is a specific bit of code that I can blame for my data destruction, maybe I don’t have to distrust the whole tool.
But to be really confident, I wanted to know that the fix wasn't just something random, and was actually done to fix this problem. So I switched the return code in my test script, and ran git bisect again to find when the bug was fixed. Eventually it found this commit:
commit 5b448b8530308b1f5a7a721cb1bf0ba557b5c78d
Author: Elijah Newren
Date:   Thu Aug 11 23:20:10 2011 -0600

    merge-recursive: When we detect we can skip an update, actually skip it

    In 882fd11 (merge-recursive: Delay content merging for renames
    2010-09-20), there was code that checked for whether we could skip
    updating a file in the working directory, based on whether the merged
    version matched the current working copy.  Due to the desire to handle
    directory/file conflicts that were resolvable, that commit deferred
    content merging by first updating the index with the unmerged entries
    and then moving the actual merging (along with the skip-the-content-update
    check) to another function that ran later in the merge process.  As part
    of moving the content merging code, a bug was introduced such that
    although the message about skipping the update would be printed (whenever
    GIT_MERGE_VERBOSITY was sufficiently high), the file would be
    unconditionally updated in the working copy anyway.

    When we detect that the file does not need to be updated in the working
    copy, update the index appropriately and then return early before
    updating the working copy.

    Note that there was a similar change in b2c8c0a (merge-recursive: When
    we detect we can skip an update, actually skip it 2011-02-28), but it
    was reverted by 6db4105 (Revert "Merge branch 'en/merge-recursive'"
    2011-05-19) since it did not fix both of the relevant types of
    unnecessary update breakages and, worse, it made use of some band-aids
    that caused other problems.  The reason this change works is due to the
    changes earlier in this series to (a) record_df_conflict_files instead
    of just unlinking them early, (b) allowing make_room_for_path() to
    remove D/F entries, (c) the splitting of update_stages_and_entry() to
    have its functionality called at different points, and (d) making the
    pathnames of the files involved in the merge available to
    merge_content().

    Signed-off-by: Elijah Newren
    Signed-off-by: Junio C Hamano
OK, this is good. Looks like they fixed the bug, and it even references the bad commit that I had narrowed things down to.
So I'm a little bit dismayed that this bug existed for almost a full year before being fixed. I can't be the only person to have been hit by this problem, can I? I looked at the release notes for v1.7.7. This is what they have to say about the issue:
The recursive merge strategy implementation got a fairly large fix for many corner cases that may rarely happen in real world projects (it has been verified that none of the 16000+ merges in the Linux kernel history back to v2.6.12 is affected with the corner case bugs this update fixes).
OK, so the bug never triggered in 16,000+ Linux kernel merges. Strangely that doesn't actually make me feel any better.
So, I don’t think git sucks. All software has bugs, but bugs that destroy data are pretty devastating. It is a little hard to trust git merge operations now. I’ll probably try to make sure I don’t merge on to a working directory (i.e: stash my changes first, since then they are at least backed up on the object database).
Of course convincing my colleagues, who were also affected by this bug, and didn’t really have any love for git in the first place, that git isn’t completely broken is going to be a tough sell.
tl;dr: pass an IPv6 address as the host argument to server.listen(port, host), e.g.: server.listen(80, '::1'), and then access your server at a URL like http://[::1]:80/.
OK, my last few posts on node.js may have seemed a little negative. While there are some things in node.js that seem a little more complicated than necessary, there are some things that are nice and simple, such as getting your server to run on both IPv4 and IPv6. This post is a little late for World IPv6 Day, but better late than never!
So this post isn't about configuring IPv6 on your machine in general. I'm going to assume that your local network interface has an IPv6 address. You can probably check this with the output of ifconfig. On my Darwin box it looks something like:
benno@ff:~% ifconfig lo0
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
        inet 127.0.0.1 netmask 0xff000000
You should see the local interface bound to the IPv6 localhost address ::1 as well as the IPv4 localhost address 127.0.0.1. So let's get started with a simple little IPv4 server.
var http = require('http')
var server

function onRequest(req, res) {
  console.log(req.method, req.url)
  res.writeHead(200, {'Content-Type': 'text/plain'})
  res.end('Hello World\n')
}

function onListening() {
  console.log('Listening at http://' + this.address().address + ':' + this.address().port + '/')
}

server = http.createServer()
server.on('request', onRequest)
server.on('listening', onListening)
server.listen(1337, '127.0.0.1')
This is a slight variation on the canonical node.js Hello World example. A few things worth noting:
For one, I use the terser on method rather than the more verbose addListener.
So, apart from my stylistic quirks, the above should be fairly straightforward. The only new thing functionality-wise compared to the normal node.js example is the addition of some trivial logging in the request handler.
So our quest is going to be to add support for IPv6. Before we do that though, I’m going to improve our logging a bit. Just because we are supporting IPv6, doesn’t mean we want to stop our server running on IPv4, so we are going to end up with multiple servers running at once. Once this happens, our logging might get a bit confusing. So we’re going to give our servers a name, and include that in the logging.
var http = require('http')
var server

function onRequest(req, res) {
  console.log('[' + this.name + ']', req.method, req.url)
  res.writeHead(200, {'Content-Type': 'text/plain'})
  res.end('Hello World\n')
}

function onListening() {
  console.log('[' + this.name + '] Listening at http://' + this.address().address + ':' + this.address().port + '/')
}

server = http.createServer()
server.name = 'ipv4server'
server.on('request', onRequest)
server.on('listening', onListening)
server.listen(1337, '127.0.0.1')
Because Javascript objects are open we can trivially add a name field to our objects, and then use this when logging. In general I avoid messing with objects created by other modules, but it is the quick and easy approach in this case.
OK, so on to IPv6. As a first stab at it, we get something like this:
var http = require('http')

function onRequest(req, res) {
  console.log('[' + this.name + ']', req.method, req.url)
  res.writeHead(200, {'Content-Type': 'text/plain'})
  res.end('Hello World\n')
}

function onListening() {
  console.log('[' + this.name + '] Listening at http://' + this.address().address + ':' + this.address().port + '/')
}

ipv4server = http.createServer()
ipv6server = http.createServer()

ipv4server.name = 'ipv4server'
ipv6server.name = 'ipv6server'

ipv4server.on('request', onRequest)
ipv6server.on('request', onRequest)

ipv4server.on('listening', onListening)
ipv6server.on('listening', onListening)

ipv4server.listen(1337, '127.0.0.1')
ipv6server.listen(1337, '::1')
Basically, creating an IPv6 server is exactly the same as creating an IPv4 server, except you use an IPv6 address literal (i.e.: ::1) to specify the local address to bind to, rather than an IPv4 address literal. You can see that there is absolutely no problem sharing the event handlers between the two different servers. The this variable in each event handler function refers to the server itself, so you can handle cases that are server specific if necessary.
When you run this you should get some output like:
[ipv4server] Listening at http://127.0.0.1:1337/
[ipv6server] Listening at http://::1:1337/
Which looks pretty good. You can try going to the IPv4 server URL in your browser. If you try the IPv6 URL, you will probably run into some problems. This is because you need to escape the IPv6 literal address in the URL, or it can't be parsed correctly (what with there being all those colons which are usually used for separating the port number). So the correct URL should be: http://[::1]:1337/. We'd better fix this bug in the code:
function onListening() {
  var hostname = this.type === 'tcp4' ?
      this.address().address :
      '[' + this.address().address + ']'
  console.log('[' + this.name + '] Listening at http://' + hostname + ':' + this.address().port + '/')
}
OK, that's looking pretty good now. If you start hitting those URLs on the different addresses you should get some useful output such as:
[ipv4server] Listening at http://127.0.0.1:1337/
[ipv6server] Listening at http://[::1]:1337/
[ipv4server] GET /
[ipv4server] GET /favicon.ico
[ipv6server] GET /
[ipv6server] GET /favicon.ico
Now, I mentioned earlier I don’t like duplicating data. I also don’t like duplicating code either, so let’s refactor this a little:
function startServer(name, address, port) {
  var server = http.createServer()

  server.name = name
  server.on('request', onRequest)
  server.on('listening', onListening)
  server.listen(port, address)

  return server
}

startServer('ipv4server', '127.0.0.1', 1337)
startServer('ipv6server', '::1', 1337)
So, in conclusion it is easy-peasy to run your web application on IPv6, and even on IPv4 and IPv6 using the exact same script.
tl;dr
Last time I wrote about some of the idiosyncrasies in the way in which you deal with exceptions in node.js. This time, I’m looking at a phenomenon I’m calling semi-asynchronous functions.
Let's start with a simple asynchronous function. We have a function x which sets the value of two global variables. Of course global variables are bad, so you could imagine that x is a method and it is updating some fields on the current object, if that makes you feel better. Of course some will argue that any mutable state is bad, but now we are getting side-tracked!
var a = 0
var b = 0

function x(new_a, new_b) {
  a = new_a
  b = new_b
}
So, here we have a pretty simple function, and it is pretty easy to state the post-condition that we expect: specifically, that when x returns, a will have the value of the first argument and b will have the value of the second argument.
So, let’s just write some code to quickly test our expectations:
x(5, 6)
console.log(a, b)
As expected this will print 5 6 to the console.
Now, if x is changed to be an asynchronous function things get a little bit more interesting. We'll make x asynchronous by doing the work on the next tick:
function x(new_a, new_b, callback) {
  function doIt() {
    a = new_a
    b = new_b
    callback()
  }
  process.nextTick(doIt)
}
Now, we can guarantee something about the values of a and b when the callback is executed, but what about immediately after calling? Well, with this particular implementation, we can guarantee that a and b will be unchanged.
function done() {
  console.log("Done", a, b)
}

x(5, 6, done)
console.log("Called", a, b)
Running this we see that our expectations hold: a and b are 0 after x is called, but are 5 and 6 by the time the callback is executed.
Of course, another valid implementation of x could really mess up some of these assumptions. We could instead implement it like so:
function x(new_a, new_b, callback) {
  a = new_a
  function doIt() {
    b = new_b
    callback()
  }
  process.nextTick(doIt)
}
Now we get quite a different result. After x is called, a has been modified, but b remains unchanged. This is what I call a semi-asynchronous function: part of the work is done synchronously, while the remainder happens some time later.
Just in case you are thinking at this point that this is slightly academic, there are real functions in the node.js library that are implemented in this semi-asynchronous fashion.
Now as a caller, faced with this semi-asynchronous functions, how exactly should you use it? If it is clearly documented which parts happen asynchronously and which parts happen synchronously and that is part of the interface, then it is relatively simple, however most functions are not documented this way, so we can only make assumptions.
If we are conservative, then we really need to assume that anything modified by the function must be in an undefined state until the callback is executed. Hopefully the documentation makes it clear what is being mutated so we don’t have to assume the state of the entire program is undefined.
Put another way, after calling x we should not rely on the values of a and b in any way, and the implementer of x should feel free to change when in the program flow a and/or b is updated.
So can we rely on anything? Well, it might be nice to rely on the order in which some code is executed. With both the implementations of x so far, we have been able to guarantee that the code immediately following the function executes before the asynchronous callback executes. Well, that would be nice, but what if x is implemented like so:
function x(new_a, new_b, callback) {
  a = new_a
  b = new_b
  callback()
}
In this case, the callback will be executed before the code following the call to x. So, there are two questions to think about. Is the current formulation of x a valid approach? And secondly, is it valid to rely on the code ordering?
While you think about that, let me introduce another interesting issue. Let's say we want to execute x many times in series (i.e.: don't start the next x operation until the previous one has finished, i.e.: it has executed the callback). Well, of course, you can't just use something as simple as a for loop; that would be far too easy, and it would be difficult to prove how cool you are at programming if you could just use a for loop. No, instead you need to do something like this:
var length = 100000;

function repeater(i) {
  if (i < length) {
    x(i, i, function() {
      repeater(i + 1)
    })
  }
}

repeater(0)
This appears to be the most widely used approach. Well, there is at least one blog post about this technique, and it has been tied up into a nice library. Now, this works great with our original implementations of x. But try it with the latest one (i.e.: the one that does the callback immediately). What happens? Stack overflow happens:
node.js:134
        throw e; // process.nextTick error, or 'error' event on first tick
        ^
RangeError: Maximum call stack size exceeded
So now the question isn’t just about whether the code ordering is a reasonable assumption to make, now we need to work out whether it is a reasonable assumption to make that the callback gets a new stack each time it is called! Once again, if it is clearly documented it isn’t that much of a problem, but none of the standard library functions document whether they create a new stack or not.
The problem here is that common usage is conflicting. There is a lot of advice and existing libraries that make the assumption that a callback implies a new stack. At the same time there is existing code within the standard library that does not create a new stack each time. To make matters worse, this is not always consistent either, it can often depend on the actual arguments passed to the function as to whether a new stack is created, or the callback is executed on the existing stack!
What then can we make of this mess? Well, once again, as a caller you need to make sure you understand what state is going to be mutated by the function, and also exactly when, and on which stack, your callback will be executed.
As an API provider, as always, you need to document this stuff, but let's try to stick to some common ground: callbacks should always be executed on a new stack, not on the existing one.
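For our toy x, keeping that guarantee is just a matter of deferring the callback even when the work happens to complete synchronously; a sketch:

function x(new_a, new_b, callback) {
  a = new_a
  b = new_b
  // The work completed synchronously, but defer the callback anyway so the
  // caller always gets a fresh stack and consistent ordering.
  process.nextTick(callback)
}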
tl;dr
One of the best things about asynchronous, callback based programming is that basically all those regular flow control constructs you are used to are completely broken. However, the one I find most broken is the handling of exceptions.
Javascript provides a fairly familiar try...catch construct for dealing with exceptions. The problem with exceptions is that they provide a great way of short-cutting errors up a call stack, but end up being completely useless if the error happens on a different stack.
Here is a simple example to get started:
function x() {
  throw new Error('my silly error')
}

x()
If we run this in node, the result is fairly intuitive, we get a nasty traceback:
node.js:134
        throw e; // process.nextTick error, or 'error' event on first tick
        ^
Error: my silly error
    at x (/Users/benno/apkudo/test.js:2:11)
    at Object.<anonymous> (/Users/benno/apkudo/test.js:5:1)
    at Module._compile (module.js:402:26)
    at Object..js (module.js:408:10)
    at Module.load (module.js:334:31)
    at Function._load (module.js:293:12)
    at Array.<anonymous> (module.js:421:10)
    at EventEmitter._tickCallback (node.js:126:26)
Now if we ignore the first few lines of junk, the rest is a fairly familiar traceback. You'll note that we are already pretty deep in a stack trace even for this very simple function. You can mostly ignore everything from Module._compile onwards.
Now instead of doing this we might want to, instead, catch this error and write some code to handle the error. We aren’t going to do anything earth shattering in the handler, just print out the exception and continue on our merry way.
function x() {
  throw new Error('my silly error')
}

try {
  x()
} catch (err) {
  console.log("Error:", err)
}
Now, if you run this you get:
Error: { stack: [Getter/Setter], arguments: undefined, type: undefined, message: 'my silly error' }
So far, so good. Just what you would expect in the normal world of programming. Let's spice things up a bit; let's make x asynchronous. We'll create a wrapper function y which will take two arguments. The first argument indicates whether to execute x synchronously or asynchronously. The second argument is a function that will be called on completion. Something like this:
function y(arg, callback) {
  if (arg === 1) {
    x()
    callback()
  } else {
    function onTick() {
      x()
      callback()
    }
    process.nextTick(onTick)
  }
}
Now this setup may seem a tad contrived, but in the real world we get situations not all that different to this. For example, the built-in listen method may do a DNS lookup on the host argument if it is not a dotted decimal. If it is a dotted decimal though, no lookup is required. So, we change our calling code appropriately:
try {
  y(1, function () {
    console.log("Callback")
  })
} catch (err) {
  console.log("Error:", err)
}
Running this gets us essentially the same output as before: we successfully catch the exception and then we are on our way. Let’s change our calling code slightly though, so that we hit the asynchronous path:
try {
  y(0, function () {
    console.log("Callback")
  })
} catch (err) {
  console.log("Error:", err)
}
Running this we now find that we get an ugly traceback. We completely failed in catching the exception:
node.js:134
        throw e; // process.nextTick error, or 'error' event on first tick
        ^
Error: my silly error
    at x (/Users/benno/apkudo/test.js:2:11)
    at Array.onTick (/Users/benno/apkudo/test.js:11:6)
    at EventEmitter._tickCallback (node.js:126:26)
What happened here is that when y hits the asynchronous path it creates an entirely new call stack, one that isn't protected by a try..catch block at the top of the call stack. So we end up with the default node exception handling code. You can see how the call stack in this case is much shorter.
How can we deal with this? Well, one way is that we just don't do exception-like things, and always explicitly return errors or pass them as arguments to callbacks. The other option is to use the event system provided by node.js. That is what we will look at next, as it is what node.js uses internally. We are going to change our code so that y emits a myerror event rather than the exception bubbling up.
var events = require('events')

emitter = new events.EventEmitter()

function y(arg, callback) {
  if (arg === 1) {
    x()
    callback()
  } else {
    function onTick() {
      try {
        x()
      } catch (err) {
        emitter.emit('myerror', err)
        return
      }
      callback()
    }
    process.nextTick(onTick)
  }
}
In this example we are just using a global emitter object. In a real example x would likely be a method on an object that sub-classes the EventEmitter class. If we run the code now we don't get any output at all! This is because we haven't yet attached a listener to the myerror event. We can do that like so:
emitter.on('myerror', function(err) {
  console.log("Error:", err)
})

y(0, function () {
  console.log("Callback")
})
Now, when we run it we get the same type of output as we did when we were catching exceptions:
Error: { stack: [Getter/Setter], arguments: undefined, type: undefined, message: 'my silly error' }
Now, you might have picked up a slight problem with the above approach. If we don't catch the exception by registering a handler for the myerror event, nothing happens; the exception is essentially ignored. This is different to normal exceptions in Javascript that will bubble right up to the run-time for reporting.
Now, we could ensure that there is always a default handler for the myerror event which dumped the traceback and exited, but you would also need to work out if another listener already handled the error or not, and so on. It turns out that node.js has already solved this problem, so instead of inventing our own event name we can use the special error event. Events called error are treated very specially by the node code. From the emit code in events.js:
if (type === 'error') {
  if (!this._events || !this._events.error ||
      (isArray(this._events.error) && !this._events.error.length))
  {
    if (arguments[1] instanceof Error) {
      throw arguments[1]; // Unhandled 'error' event
    } else {
      throw new Error("Uncaught, unspecified 'error' event.");
    }
    return false;
  }
}
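So in our example, instead of inventing myerror we can just emit the standard error event; a sketch of how the earlier code changes:

function onTick() {
  try {
    x()
  } catch (err) {
    // With the special 'error' event, if no listener is attached the
    // emitter re-throws the error rather than silently swallowing it.
    emitter.emit('error', err)
    return
  }
  callback()
}

emitter.on('error', function (err) {
  console.log("Error:", err)
})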
Of course, registering an error event listener doesn't magically trap any exceptions that might be raised in the old fashioned way. This means that if you are writing code, you really need to understand which type of error handling approach the code you are calling uses, and ensure you handle it appropriately.
Unfortunately, most APIs don’t actually document this error handling behaviour, so you are forced to go and read the code to work out exactly what is going on. If you are writing an API, please make it easy on your fellow developer by documenting this kind of thing.