Determining Dependencies

Tue, 15 Mar 2011 18:58:48 +0000
tech build-system

One of the best ways to make a build system fast is to avoid unnecessary rebuilding of files. Build tools have a variety of ways of achieving this. To discuss them, let’s first define some terms. One way to look at a build system is that it transforms some set of inputs into some set of outputs. Usually these inputs and outputs are files in the file system, but they could potentially be something else, such as tables in a database. Usually the build system consists of a set of build rules; each build rule has some set of inputs, and produces a set of outputs by running a given build command. You’ll have to forgive the abstruse nature of these definitions, but I’m attempting to keep the design space as open as possible!

So to improve the speed of the build system we want to avoid executing unnecessary build commands. To do this in any reasonable way requires making some assumptions about the individual build rules, specifically that the output for a build rule only depends on the build rule’s inputs. With such an assumption there is no need to rerun a build command if the inputs of a build rule have not changed.
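As a rough sketch of this assumption in code (hypothetical helper names; real build tools use timestamps or content hashes in far more sophisticated ways), a build command can be skipped when a fingerprint of its inputs matches the one recorded on the previous run:

```python
import hashlib
import os

def fingerprint(paths):
    """Combine the contents of all input files into a single digest."""
    h = hashlib.sha256()
    for path in sorted(paths):
        h.update(path.encode())
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def run_rule(inputs, outputs, command, state):
    """Run `command` only if the rule's inputs changed since last time.

    `state` maps a rule's outputs to the input fingerprint recorded
    when those outputs were last generated."""
    key = tuple(sorted(outputs))
    current = fingerprint(inputs)
    if state.get(key) == current and all(os.path.exists(o) for o in outputs):
        return False  # inputs unchanged and outputs present: skip
    command()
    state[key] = current
    return True
```

Everything that follows is really about how the inputs list handed to something like run_rule gets populated in the first place.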

This in itself is an interesting restriction, as the inputs to a build rule may not be entirely obvious. For example, the output of a build rule may depend on the time, the user name or the hostname. Another problem is build commands that have inputs which are also outputs (i.e: commands that modify files, such as ar). And of course the given command for a build rule may itself change. For example, imagine a build system that supports a --ndebug argument, which causes the compile command to have an extra -DNDEBUG argument.

So the aim of this article is to explore the design space of how build tools handle the specification of the inputs to a given build rule.

Explicit

Now the easiest approach is for the build system to explicitly list the inputs for each build rule. This is the baseline approach for something like make. The difficulty with this approach is that it can be error prone. A prototypical extract from a Makefile might look something like:


foo.o: foo.c
     gcc foo.c -o foo.o

Now if foo.c includes foo.h then there is a problem. Since foo.h is not captured as one of the inputs to the build rule, if foo.h changes, then the build command will not be re-run.

Of course, it is quite simple to include foo.h as one of the inputs for this specific build rule, but that is pretty brittle. C already sucks enough, having to both declare and define public functions, without making it even more annoying by requiring you to update the build system every time you add a header to a source file. (And of course the header should be removed from the build system when it is removed from the C file; however, forgetting to do this will just affect performance, not the correctness of the build system.)

Another possibility is to treat each of the include path directories as an input to the build rule. There are some other issues with determining if a directory has changed from one build to the next, but we’ll ignore that for now. This approach should be relatively easy to use, and should be correct most of the time, but has a performance drawback if the include path has many include files, and most source files only include one or two headers.

Rule Specific Scanners

Rather than manually defining each and every input file, another approach is to have some rule-specific process that can determine the correct inputs for a given rule. gcc has a -MM option which can be used to determine which header files would be used to compile a given file. This can be used in conjunction with make to automatically determine the inputs for any compile rules, which is a great improvement over manually managing the dependencies for each source file. However, there are some drawbacks.
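Before getting to the drawbacks, it is worth seeing how little work it takes to consume that output. gcc -MM emits a make-style rule naming the object file and every header it would read; a build tool just parses it back into an input list. A rough sketch of that parsing step (run against sample text rather than invoking gcc):

```python
def parse_make_deps(text):
    """Parse a make-style dependency rule, such as the output of
    `gcc -MM` (a target, a colon, then prerequisites, possibly
    continued across lines with backslashes).

    Returns (target, prerequisites)."""
    # Join backslash-continued lines into one logical line.
    logical = text.replace("\\\n", " ")
    target, _, deps = logical.partition(":")
    return target.strip(), deps.split()
```

For example, parse_make_deps("foo.o: foo.c foo.h\n") returns ("foo.o", ["foo.c", "foo.h"]); the prerequisite list is exactly the build rule’s input set.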

The first is that the overall compile time is affected, as each time a file is compiled the dependencies must also be generated. In practise this isn’t too bad; scanning for headers isn’t particularly CPU intensive, and since the compile will touch the same files it doesn’t result in any extra I/O (and if the files weren’t cached, it primes the cache for the compile anyway).

The second problem is that gcc -MM is a gcc-specific thing that isn’t going to work with other compilers, or other tools more generally. Of course, it would be possible to write a dependency-generation script for each type of build rule, but this is potentially a lot of work, and can have accuracy problems, as the actual build command may in fact work slightly differently to the dependency script, which risks incorrect builds.

The next problem is to do with generated header files. If a source file includes a generated header file, and that generated header file does not yet exist, then gcc -MM will result in an error. gcc provides a -MG option to help account for this problem; however, it is far from perfect. It assumes that the include path of the generated header is the current working directory, which may not actually be the case. Generated files are not necessarily a problem; depending on some other design decisions it is possible to ensure that this dependency scanning occurs at the same time as compilation, so a missing include really would be an error.

Another way to avoid the generated header file problem is to make the scanning operation aware of the rest of the build rules. For example, when searching for a specific include file, the scanning tool could check not just the file system itself, but also the known outputs of other build rules in the system. This approach has the drawback that the accuracy of scanning might not be adequate. For example, the SCons build tool uses this approach, but can generate an incorrect set of inputs when include files are conditionally included, or when headers are included through a macro. E.g:


#define ARCH mips
#define ARCH_INC(x) <ARCH/x>
#include ARCH_INC(foo.h)   /* expands to <mips/foo.h> */

Of course, you can argue that such a construct is probably less than ideal; however, any approach like this is going to be prone to the same class of errors.

Depending on the exact model for determining the execution order of build commands in the system (the subject of a later article) the time at which the scanning occurs can have a major impact on performance.

A final problem with this approach in practise is that it can actually miss some dependencies. Consider a command such as gcc -Iinc1 -Iinc2 foo.c. If foo.c includes foo.h, and foo.h resides in the inc2 directory, then this approach will generally report that foo.c depends on inc2/foo.h. However, this misses an important bit of information: the command also depends on the non-existence of foo.h in the inc1 directory. If foo.h is added to inc1 then the output of the command will be different, but an incremental build would miss this and not rebuild. In theory there is no reason why such a tool can’t report that a build rule depends on the non-existence of files as well; and indeed no reason why explicit listing of inputs can’t do the same thing.
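A sketch of what recording such non-existence dependencies could look like (hypothetical code, not something gcc -MM or SCons actually does): when resolving an include against the search path, note every candidate probed before the header is found:

```python
import os

def resolve_include(name, search_path):
    """Resolve an include `name` against an ordered list of directories.

    Returns (found, missing): the path found (or None), plus the
    candidate paths searched before it. The build rule then depends
    both on `found` and on the continued *absence* of everything in
    `missing`."""
    missing = []
    for directory in search_path:
        candidate = os.path.join(directory, name)
        if os.path.exists(candidate):
            return candidate, missing
        missing.append(candidate)
    return None, missing
```

With -Iinc1 -Iinc2 and foo.h present only in inc2, this returns inc2/foo.h as the input, along with inc1/foo.h as a file whose appearance should also trigger a rebuild.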

Tracing command execution

The only other approach (that I know of) is to track what files the build command actually touches when it executes. There are a couple of ways in which this can be done, but all approaches conceptually track when the build command opens files. One example of this is the memoize.py build tool, which uses strace to trace which files are touched by a given build command.
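The hard part of this approach is the OS-specific tracing itself; extracting the inputs from a trace is comparatively easy. A rough sketch of that second half (run against sample strace output text, since strace itself is Linux-only):

```python
import re

# Matches trace lines such as:
#   open("foo.c", O_RDONLY) = 3
#   openat(AT_FDCWD, "inc1/foo.h", O_RDONLY) = -1 ENOENT (...)
OPEN_RE = re.compile(r'open(?:at)?\((?:AT_FDCWD, )?"([^"]+)"[^)]*\)\s*=\s*(-?\d+)')

def files_opened(trace):
    """Split a trace into files the command successfully opened and
    files it tried to open but which did not exist."""
    opened, failed = set(), set()
    for line in trace.splitlines():
        m = OPEN_RE.search(line)
        if m:
            path, ret = m.group(1), int(m.group(2))
            (opened if ret >= 0 else failed).add(path)
    return opened, failed
```

Note that the failed set is exactly the non-existence dependency information that scanners usually drop.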

This approach has the very large advantages of being pretty accurate, capturing all the files that a build command touches, and not needing any rule-specific logic to determine the input files.

This approach can also easily capture files that a command attempted to open but which did not exist, which covers the non-existence dependencies described earlier.

There are of course a few disadvantages. Firstly, there is no standard API for tracing, so this part of any build tool ends up being OS specific, which is not ideal. Tracing can also add significant overhead to the execution of build commands; benchmarks are required to see how the performance of simply tracing build commands compares with running a rule-specific scanner or dependency generator.

A potential disadvantage of this approach is false dependencies. If a build command opens a file but doesn’t actually consider the contents of the file during the execution of the command, the file will still be marked as a dependency, although there would be no need to rebuild if that file changed. This is not a correctness problem, but could cause excessive unnecessary rebuilds.

Probably the biggest disadvantage of this approach is that all the input files for a build rule are not known until after the build command has executed. This has some pretty significant impacts on the different approaches that can be used for choosing the build rules execution order.

Summary

There are different approaches that can be used to determine the set of input files for a given build rule; each has pros and cons, and there is no clear winner.

My preference is the automatic detection of inputs for a build rule using a tracing approach of some kind. It wins in terms of correctness and also ease-of-use. Performance is a little bit unknown; however, it should approach the performance of using a secondary script for determining the inputs.

Disagree with my analysis? Can you suggest some alternatives? Please leave a comment.

Build System Requirements

Sun, 13 Mar 2011 14:46:30 +0000
tech build-system

On just about any project larger than hello world you are going to want some kind of tool that can automate all the steps required to build the program. In general we give the name build system to the various components that go into generating the final program. Build systems are probably up there with revision control systems as one of the most important software engineering tools. In this first post of a series about build systems I’m going to try to put together some of the requirements that a project’s build system should meet.

So what is a build system? Basically, something that takes in the project source and generates the project artifacts (generally programs, libraries, tarballs, etc).

Before getting too far into the discussion it is useful to draw the distinction between a project’s build system and the underlying build tool (e.g: make, scons, etc). This post is primarily about the former, not the latter. Of course the choice of tool can make it easier or harder to achieve the goals of the project’s build system, but it is possible to build good or bad systems regardless of the underlying tool.

Completeness

The build system should make it easy to build all the project’s outputs right up to something in releasable format. See also The Joel Test #2. This is not always easy to achieve, especially if some of your build process includes tools that are not easily scriptable.

Ideally this should include all the artifacts that are related to the release, including user manuals, release notes, test reports and so forth. Enabling the build system to automate the generation of all these artifacts requires some pretty careful thinking about the entire software development process.

For example, release notes may be as simple as pulling all the commit messages from your revision control system. Of course, depending on your choice of revision control system this may be more or less difficult. And there are plenty of reasons why simply pulling commit messages is not the best approach to release notes; a higher-level, feature-oriented change list is often more appropriate.

Another area is how testing will work. Clearly such an approach pushes for automated testing procedures, but this is not always so straightforward, especially for embedded software where the final test requires some form of manual testing.

Another point of consideration is build variants. Although most developers will only be building and testing a certain configuration or target at once, it should be possible to build all the supported variants of the project in one go.

Correctness

The output of any invocation of the build system should be correct. This seems obvious, but can actually be very difficult to get right. The main place this is relevant is when thinking about incremental builds. Incremental builds are a pretty common optimisation that attempts to avoid doing unnecessary work to regenerate the outputs when the inputs haven’t changed. In such a case, the result of doing an incremental build should be the same as doing a full build.

The usual way this goes wrong is that the dependencies for an output command are incorrectly specified or determined; this can lead to a file changing (such as a header) and an output file not being rebuilt.

Performance

The build system should be fast! Building the entire project should be fast, and incremental builds should also be fast. As a programmer you really want the edit-compile-test cycle to be fast; it doesn’t take long before you lose concentration and just quickly check Twitter or Hacker News.

Incremental builds are normally the main thing required for performance, but an oft-overlooked aspect is partial builds, where only some subset of the outputs is required.

XKCD: The #1 programmer excuse for legitimately slacking off: “My code’s compiling”

Easy to use

To be effective the build system should be easy to use for all the developers. Of course there are a lot of aspects of ease-of-use. Firstly, it should be simple to start and execute a build; this part is usually pretty easy to achieve. It should also be easy to understand what the different targets are, and build for a specific target; this can be slightly harder to get right.

Ideally it would be easy and straightforward for developers to change the build, for example adding or removing targets, changing the build command line, and so forth. This is the bit that many build systems fail at. It’s often very difficult to work out where a particular target is defined, and where the various different flags are set; especially when flags are the accumulation of options set in various different places.

Summary

So, this has been a pretty high-level overview of the requirements of a build system. In future posts, I'm going to look at the design of various build tools, and how the design of these tools can help achieve the goals outlined in this post.

If you have other requirements on a build system, it would be great if you could leave a comment or send me an e-mail.

Python getCaller

Thu, 06 Jan 2011 12:55:06 +0000
python tech

I’ve been doing a bit of Python programming of late, and thought I’d share a simple trick that I’ve found quite useful. When working with a large code-base it can sometimes be quite difficult to understand the system’s call-flow, which can make life trickier than necessary when refactoring or debugging.

A handy tool for this situation is to print out where a certain function is called from, and Python makes this quite simple to do. Python’s inspect module provides a very powerful way of determining the current state of the Python interpreter. The stack function provides a mechanism to view the stack.

import inspect

print inspect.stack()

inspect.stack gives you a list of frame records. A frame record is a 6-tuple that, among other things, contains the filename and line number of the caller location. So, in your code you can do something like:

import inspect
_, filename, linenumber, _, _, _ = inspect.stack()[1]
print "Called from: %s:%d" % (filename, linenumber)

The list index used is 1, which refers to the caller’s frame record. Index 0 refers to the current function’s frame record.

Now, while this is just a simple little bit of code, it is nice to package it into something more reusable, so we can create a function:

def getCallsite():
    """Return a string representing where the function was called from in the form 'filename:linenumber'"""
    _, filename, linenumber, _, _, _ = inspect.stack()[2]
    return "%s:%d" % (filename, linenumber)

The tricky thing here is to realise that it is necessary to use list index 2 rather than 1; the call to getCallsite itself pushes an extra frame record onto the stack.
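To make the index arithmetic concrete, here is the function together with a small helper built on top of it (repeated in full, and returning a string rather than printing, so the example stands alone):

```python
import inspect

def getCallsite():
    """Return 'filename:linenumber' of our caller's caller."""
    _, filename, linenumber, _, _, _ = inspect.stack()[2]
    return "%s:%d" % (filename, linenumber)

def log(msg):
    # Inside getCallsite, index 0 is getCallsite itself, index 1 is
    # log, and index 2 is whoever called log -- which is what we want.
    return "%s (from %s)" % (msg, getCallsite())

def do_work():
    return log("starting work")
```

Calling do_work() yields something like "starting work (from example.py:14)", pointing at the line inside do_work where log was called.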

The ability to inspect the stack provides the opportunity to do some truly awful things (like making the return value dependent on the caller), but that doesn’t mean it can’t be used for good as well.

Building an AVR toolchain on OS X

Fri, 09 Jul 2010 19:49:17 +0000
tech avr arduino gcc

I’ve recently got myself a couple of Arduino Duemilanove boards from Little Bird Electronics. Now the Arduino is a pretty cool bit of kit, and it comes with a great integrated development environment, which makes writing simple little programs a snap. Of course, I’m too much of an old curmudgeon to want to use an IDE and some fancy-schmancy new language; I want make, I want emacs, I want gcc. So I embarked on the task of building a cross-compiler for the AVR board that will work on OS X.

Photo of my Arduino board

Now, my ideal solution for this would have been to create a package for Homebrew, my current favourite package manager for OS X. Unfortunately, that just isn’t going to happen right now. GCC toolchains pretty much insist on putting things in $prefix/$target. So, in a standard Homebrew install, that would mean I need a directory /usr/local/avr; unfortunately, Homebrew insists your package only dump things in etc, bin, sbin, include, share or lib. After wasting a bunch of time first fighting GCC’s autogoat build system to try to convince it to conform to Homebrew’s idea of a filesystem hierarchy, and then a whole other bunch of time trying to learn Ruby and convince Homebrew that other directories should be allowed, I took the coward’s way out and decided /opt/toolchains is a perfectly respectable place for my cross-compilers to live.

And so, the cross-compiler dance begins. We start with binutils, and as the canon dictates, we start by trying the latest version, which at the time of writing is 2.20.1.

Of course, building a toolchain for a non-x86 based architecture wouldn’t be the same if you didn’t need to patch the source. The patches for the AVR are mostly minimal, adding some extra device definitions for devices you probably don’t have anyway. If you miss this step, expect some pain when you try to compile avr-libc. I got my patches from the FreeBSD AVR binutils patchset. I try to avoid patches I don’t need, so I have only applied the patch-newdevices patch. If things break for you, you might want to try the other patches as well.

  $ mkdir avr-toolchain-build
  $ cd avr-toolchain-build
  $ wget http://ftp.gnu.org/gnu/binutils/binutils-2.20.1.tar.bz2
  $ wget "http://www.freebsd.org/cgi/cvsweb.cgi/~checkout~/ports/devel/avr-binutils/files/patch-newdevices?rev=1.16;content-type=text%2Fplain" -O patch-newdevices
  $ tar jxf binutils-2.20.1.tar.bz2
  $ cd binutils-2.20.1
  $ patch -p0 < ../patch-newdevices
  $ cd ..
  $ mkdir binutils-build
  $ cd binutils-build
  $ ../binutils-2.20.1/configure --target=avr --prefix=/opt/toolchains/ --disable-werror
  $ make -j2
  $ make install
  $ cd ..
Fairly simple; the only gotcha is the --disable-werror. It seems that binutils added the -Werror flag to the build (good move!); unfortunately, the code isn’t warning-free, so things barf (bad move!), but at least it comes with a flag to disable the error... that’s good.

So, now we move on to GCC. Since I’m really only interested in compiling C code (remember the bit about being a curmudgeon; none of this fancy C++ stuff for me, no siree!), we can just grab GCC’s core package.

Now GCC has some dependencies these days, so you need to get gmp, mpc and mpfr installed somehow. I suggest using Homebrew. In fact it was this step with my old MacPorts setup that forced me to switch to Homebrew; too many weird library conflicts with iconv between the OS X version and the ports version. But hey, your mileage may vary! (Note: make sure you have the latest version of Homebrew, which includes my fix for building mpfr.)

And of course, just like binutils, you need some patches as well. Unfortunately things here aren’t as easy. There doesn’t appear to be any semi-official patch for new devices for gcc 4.5.0! So, based on this patch, I managed to hack something together. The formats have changed a little, so I’m not 100% confident that it works; if you are actually trying to use one of the new devices in this patch, be a little skeptical, and double-check my patch. If you are using an existing supported AVR like the atmega328p, then the patch should work fine (well, it works for me).

  $ brew install gmp libmpc mpfr
  $ wget http://ftp.gnu.org/gnu/gcc/gcc-4.5.0/gcc-core-4.5.0.tar.bz2
  $ wget http://benno.id.au/drop/patch-gcc-4.5.0-avr-new-devices
  $ tar jxf gcc-core-4.5.0.tar.bz2
  $ cd gcc-4.5.0
  $ patch -p1 < ../patch-gcc-4.5.0-avr-new-devices
  $ cd ..
  $ mkdir gcc-build
  $ cd gcc-build
  $ ../gcc-4.5.0/configure --target=avr --prefix=/opt/toolchains
  $ make -j2
  $ make install
  $ cd ..

So, the final piece to complete the toolchain is getting a C library. You almost certainly want AVR Libc, or you can be really hard-core and go without a C library at all; your call.

  $ wget http://mirror.veriportal.com/savannah/avr-libc/avr-libc-1.7.0.tar.bz2
  $ tar jxf avr-libc-1.7.0.tar.bz2
  $ mkdir avr-libc-build
  $ cd avr-libc-build
  $ ../avr-libc-1.7.0/configure --prefix=/opt/toolchains --host=avr --build=
  $ make
  $ make install

This should work without any problems; however, if you messed up the new-devices patches when building GCC and binutils, you might get errors at this point.

So, now that we have a working GCC toolchain for the AVR we can start programming. Of course, you really want to have a good reason to do things this way; otherwise I really recommend just using the Arduino development environment.

Next time I’ll be looking at getting some example C programs running on the Arduino.

Android Emulator Internals — Bus Scanning

Tue, 09 Feb 2010 15:14:27 +0000
tech android

Wow, I can’t believe it has been over two years since I last wrote about Android’s fork of the QEMU emulator. It turns out there have been some changes since I last looked at it.

The most important is that the Android emulator no longer has a fixed layout of devices in the physical memory address space. So, while it may have previously been the case that the event device was always at 0xff007000, now it might be at 0xff008000, or 0xff009000, depending on what other devices are present in a particular device configuration.

Now, if a device may exist at some arbitrary physical address, how does the OS know how to set up the device drivers? Well, as I’m sure you’ve guessed, the addresses aren’t really random; they are located at page-aligned addresses within a restricted range of memory. OK, so how does the OS know what is in that range? Well, there is the goldfish_device_bus device.

Basically, this device provides a mechanism to enumerate the devices on the bus. The driver writes PDEV_BUS_OP_INIT to the PDEV_BUS_OP register, and the goldfish_device_bus then raises an interrupt. The driver then reads the PDEV_BUS_OP register. Each time the value is PDEV_BUS_OP_ADD_DEV, the driver can read the other registers, such as PDEV_BUS_IO_BASE, PDEV_BUS_IO_SIZE and PDEV_BUS_IRQ, to determine the properties of the new device. It continues doing this until it reads PDEV_BUS_OP_DONE, which indicates the bus scan has finished.

The driver can determine what type of device it has found by writing a pointer to the PDEV_BUS_GET_NAME register. When this happens, the device writes the device’s name (as an ASCII string) to that pointer.
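That handshake can be sketched as a toy model (the register and operation names come from the description above, but the numeric values and the Python interface are purely illustrative; the real driver does memory-mapped register accesses from C):

```python
# Hypothetical operation values; the real constants live in the
# goldfish kernel driver and emulator source.
PDEV_BUS_OP_INIT = 0
PDEV_BUS_OP_ADD_DEV = 1
PDEV_BUS_OP_DONE = 2

class GoldfishBus:
    """Toy model of the goldfish_device_bus registers."""

    def __init__(self, devices):
        self._devices = devices
        self._pending = []
        self._current = None

    def write_op(self, value):
        if value == PDEV_BUS_OP_INIT:
            self._pending = list(self._devices)  # (re)start the scan

    def read_op(self):
        if self._pending:
            self._current = self._pending.pop(0)
            return PDEV_BUS_OP_ADD_DEV
        return PDEV_BUS_OP_DONE

    # Stand-ins for PDEV_BUS_IO_BASE, PDEV_BUS_IO_SIZE, PDEV_BUS_IRQ
    # and the PDEV_BUS_GET_NAME mechanism.
    def read_io_base(self): return self._current["base"]
    def read_io_size(self): return self._current["size"]
    def read_irq(self): return self._current["irq"]
    def read_name(self): return self._current["name"]

def scan_bus(bus):
    """Driver side: kick off a scan, then read devices until DONE."""
    bus.write_op(PDEV_BUS_OP_INIT)
    found = []
    while bus.read_op() == PDEV_BUS_OP_ADD_DEV:
        found.append((bus.read_name(), bus.read_io_base(),
                      bus.read_io_size(), bus.read_irq()))
    return found
```

The loop mirrors what the Linux driver does: one init write, then repeated op reads, collecting the per-device registers each time a new device is announced.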

Linux uses these strings to perform device to driver matching as described in the Platform Devices and Drivers document.

Weird things about JavaScript

Mon, 04 Jan 2010 21:43:58 +0000
tech javascript

JavaScript keeps throwing up interesting new tidbits for me. One that kind of freaked me out the other day is when functions and variables are created in a given block of code.

My expectation was that in a block of code, a sequence of statements, the statements would be executed in order, one after another, just like in most imperative languages. So for example in Python I can do something like:

 def foo(): pass
 print foo

and expect output such as:

 <function foo at 0x100407b18>

However, I would not expect the same output if I wrote the code something like this:

 print foo
 def foo(): pass

and in fact, I don’t. I get something close to:

  NameError: name 'foo' is not defined

Now in JavaScript, by contrast, I can do something like:

 function foo() { }
 console.log(foo);

and the console will dutifully have foo () printed in the log. However, and here comes the fun part, I can also do this:

 console.log(foo);
 function foo() { }

which blows my mind. I guess this really comes in handy when... nope, can’t think of any good reason why this is a useful feature. (I’m sure there must be a good reason, but damned if I can work it out.) But this is only where the fun begins, because the same thing works for variables!

Now usually in Javascript, if you have a function, and write code like:

 function foo() { 
     x = 37;
 }
 foo();
 console.log("x:", x);

You find out you’ve stuffed up and accidentally written to the global object, because for some brain-dead reason when you assign to a variable that doesn’t exist, JavaScript will merrily walk up the scope chain to the global object and plonk it in there. If you do something like:

 function foo() { 
     var x;
     x = 37;
 }
 foo();
 console.log("x:", x);

this time the x = 37 line updates the function’s locally scoped variable, and doesn’t mess with the global object (so the final console.log now fails, since no global x is ever created). But now comes the part that screws with your head. You can just as easily write this as:

 function foo() { 
     x = 37;
     var x;
 }
 foo();
 console.log("x:", x);

and it will have exactly the same effect. Now it is fairly clear what is happening here; as a function is parsed any variables and functions are created at that time. It turns out though that although variables are created, they are not initialised, so code such as:

 var x = 12;
 function foo() { 
     console.log("x:", x);
     var x = 37;
 }
 foo();

will output x as undefined (not 37, or 12). Now this behaviour isn’t wrong, or necessarily bad, but it was certainly counter to my expectation and experience with other languages.

Monkey Patching Javascript

Fri, 01 Jan 2010 22:54:22 +0000
tech javascript html5 software

Javascript is a very permissive language; you can go messing around in the innards of other classes to your heart’s content. Of course the question is: should you?

Currently I’m playing around with the channel messaging feature in HTML5. In a nutshell, this API lets you create a communication MessageChannel, which has two MessagePort objects associated with it. When you send a message on port1 an event is triggered on port2 (and vice-versa). You send a message by calling the postMessage method on a port object. E.g:

  port.postMessage("Hello world");

This opens up a range of interesting possibilities for web developers, but this blog post is about software design, not cool HTML5 features.

Unfortunately postMessage is very new, and the implementations have not yet caught up with the specification. Although you are meant to be able to transfer objects using postMessage, currently only strings are supported, and any other objects are coerced into strings. This has an interesting side-effect. If we have code such as:

  port.postMessage({"op" : "test"});

the receiver of the message ends up with the string [object Object], which is mostly useless; actually, it is completely useless. So, we want to transfer structured data over a pipe that just supports standard strings; sounds like a job for JSON. So, now my code ends up looking like:

  port.postMessage(JSON.stringify({"op" : "test"}));

Now this is all good, but it gets a bit tedious having to type that out every time I want to post a message, so as a naïve approach we can simply create a new function, postObject:

 function postObject(port, object) {
   return port.postMessage(JSON.stringify(object));
 }

 postObject(port, {"op" : "test"});

OK, so this works, and it is pretty simple to use, but there is an aesthetic here that makes this grate just a bit: why do I have to do postObject(port, object), why can’t I do port.postObject(object)? That is more “object-oriented”. Thankfully, JavaScript lets us monkey patch objects at run time. So, if we do this:

 port.postObject = function (object) {
   return this.postMessage(JSON.stringify(object));
 }
 
 port.postObject({"op" : "test"});

OK, so far so good. What problems does this have? Well, firstly we are creating a new function for each port object, which isn’t great for either execution time, or memory usage if we have a large number of ports. So instead we could do:

  function postObject(object) {
    return this.postMessage(JSON.stringify(object));
  }
  port.postObject = postObject;

  port.postObject({"op" : "test"});

This works well, except we have two problems. The postObject function ends up as a global function, and if you called it as a global function, the this parameter would be the global window object rather than a port object; that would be an easy mistake to make, and difficult to debug. Additionally, we end up with additional per-object data for storing the pointer. Thankfully JavaScript has a powerful mechanism for solving both these problems: the prototype object (not to be confused with the JavaScript framework of the same name).

So, if we update the prototype object, instead of object directly, we don’t need to add the function to the global namespace, we avoid per-object memory usage, and we avoid extra code having to remember to set it for every port object:

 MessagePort.prototype.postObject = function postObject(object) {
   return this.postMessage(JSON.stringify(object));
 } 

 port.postObject({"op" : "test"});

Now, this ends up working pretty well. The real question is: should we do this? Or rather, which of these options should we choose? There is an argument to be made that monkey-patching anything is just plain wrong, and it should be avoided. For example, rather than extending the base MessagePort class, we could create a sub-class. (Exactly how you create sub-classes in JavaScript is another matter!)

Unfortunately sub-classing doesn’t get us too far, as we are not the ones directly creating the MessagePort instance; the MessageChannel constructor does this for us. (I guess we could monkey-patch MessageChannel, but that defeats the purpose of avoiding monkey-patching!)

Of course, another option would be to create a class that encapsulates a port, taking one as a constructor. E.g:

 function MyPort(port) {
   this.port = port;
 }

 MyPort.prototype.postObject = function postObject(object) {
   this.port.postMessage(JSON.stringify(object));
 }

 port = new MyPort(port);
 port.postObject({"op" : "test"});

Of course, this means we have to remember to wrap all our port objects in this MyPort class, which is kind of messy (in my opinion). Also, we can no longer call the standard port methods directly. Of course, we could create wrappers for all those methods too, but then things are getting quite verbose, and we are stuffed when it comes to inspecting properties.

Unlike some other object-oriented languages, JavaScript provides another alternative: we could change the class (i.e: the prototype) of the object at runtime. E.g:

 function MyPort(port) { }
 MyPort.deriveFrom(MessagePort); /* Assume this is how we create sub-classes */

 MyPort.prototype.postObject = function (object) {
    this.postMessage(JSON.stringify(object));
 }

 port.__proto__ = MyPort.prototype; /* Change class at runtime */
 port.postObject({"op" : "test"});

This solves most of the problems of the encapsulated approach, but we still have to remember to adapt the object, and additionally, __proto__ is a non-standard extension.

OK, so after quickly looking at the sub-classing approaches I think it is fair to discount them. We are still left with trying to determine if any of the monkey-patching approaches is better than a simple function call.

So, there is mostly a consensus out there that monkey-patching the base objects is verboten, but what about other objects?

Well, if you are in your own code, I think it is a case of anything goes, but what if we are providing reusable code modules for other people? (Of course, even in your own code, there might be libraries you include that are affected by your overloading.) When base objects start working in weird and wonderful ways just because you import a module, debugging becomes quite painful. So I think changing the underlying implementation (like the example does when monkey-patching the postMessage method) should probably be avoided.

OK, so now the choices are down to a plain function vs. adding a method to the built-in class’s prototype. If we just add a new global function, we could be conflicting with any library that also names a global function in the same way. If we add a method to the prototype, at least we limit the namespace pollution to just the MessagePort object; but really, neither option is ideal.

The accepted way to get around this problem is to create a module specific namespace. This reduces the number of potential conflicts. E.g:

 var Benno = {};

 Benno.postObject = function postObject(port, object) {
   return port.postMessage(JSON.stringify(object));
 }

 Benno.postObject(port, {"op" : "test"});

Now, this avoids polluting the global namespace (except for the single Benno object), so it would have to come out above the prototype extension approach. We should also consider whether it is possible to play any namespace tricks with the prototype approach. It might be nice to think we could do something like:

  MessagePort.prototype.Benno = {}
  MessagePort.prototype.Benno.postObject = function postObject(object) {
    return this.postMessage(JSON.stringify(object));
  }

  port.Benno.postObject(object);

but this doesn’t work because of the way methods and the this object interact: this inside the function ends up referring to the Benno module object, rather than the MessagePort object.
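The failure is easy to reproduce with a stand-in class (Port here is hypothetical, standing in for MessagePort so the snippet is self-contained): when the method is looked up via the namespace object, this is bound to that namespace object, which has no postMessage.

```javascript
// Stand-in for MessagePort:
function Port() {}
Port.prototype.postMessage = function (msg) { return msg; };

// The namespace-on-the-prototype trick:
Port.prototype.Benno = {};
Port.prototype.Benno.postObject = function (object) {
  // `this` here is Port.prototype.Benno, not the port instance,
  // so there is no postMessage available to call.
  return typeof this.postMessage;
};

var port = new Port();
// port.Benno.postObject({}) evaluates to "undefined"
```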

Even assuming this did work, the function approach has some additional benefits. If the user wants to reduce the typing they can do something like:

   var $B = Benno;
   $B.postObject(port, object);

or even,

  var $P = Benno.postObject;
  $P(port, object);

The other advantage of this scheme is that for someone debugging the code, it is much more obvious where to look and what is happening. If you were reading code and saw Benno.postObject(port, object), it would be clear where the code came from, and where to start looking to debug things.

So, in conclusion, the best approach is also the simplest: just write a function (but put it in a decent namespace first). Sure, instance.operation(args) looks nicer than operation(instance, args), but in the end the ability to namespace the function, along with the advantage of a clear distinction between built-in and added functionality, means that the latter solution wins the day in my eyes.

If you have some other ideas on this I’d love to hear them, so please drop me an e-mail. Thanks to Nicholas for his insights here.

HTML5 FileApi and Jpeg Meta-data

Wed, 30 Dec 2009 10:03:17 +0000
tech html5 javascript jpeg exif

I’m really impressed with the way the HTML5 spec is going, and the fact that it is quickly going to become the default choice for portable application development.

One of the latest additions to help support application development is the File API. This API enables a developer to gain access to the contents of files locally. The main new data structure a developer is provided with is the FileList object, which represents an array of File objects. FileList objects can be obtained from two places: input form elements and drag & drop DataTransfer objects.

Based on this latest API, I’ve created a simple library, JsJpegMeta for parsing Jpeg meta data.

I’ve hacked together an example that demonstrates the library. Just select a JPEG file from the form, or drag a JPEG file onto the window. For large JPEG files you might need to be a little patient, as it can be a little slow. This slowness, surprisingly, doesn’t appear to be in the JavaScript part, but rather in Firefox’s handling of large data: URLs and JPEG display in general.

The rest of this post goes into some of the details. Unfortunately only Firefox 3.6 supports these new APIs right now.
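Since support is so limited right now, it is worth feature-detecting before relying on the API. A minimal sketch (the check is only for the FileReader constructor used throughout this post; what you do in the fallback branch is up to you):

```javascript
// Detect the File API by checking for the FileReader constructor.
function supportsFileApi() {
  return typeof FileReader !== "undefined";
}

if (!supportsFileApi()) {
  // Fall back, e.g. upload the file to a server for processing instead.
}
```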

Using the File API

Here is an example of how to get access to a FileList. When the user chooses a file, it calls the JavaScript function loadFiles (assuming you have already defined that function).

  <form id="form" action="javascript:void(0)">
    <p>Choose file: <input type="file" onchange="loadFiles(this.files)" /></p>
  </form>

A File object just provides a reference to a file; to actually get some data out of the file you need to use a FileReader object. The FileReader object provides an asynchronous API for reading the file data into memory. Three read methods are provided: readAsBinaryString, readAsText and readAsDataURL. A callback, onloadend, is executed when the file has been read into memory; the data is then available via the result field.

Here is an example of what the loadFiles function might look like:

function loadFiles(files) {
    var binary_reader = new FileReader();

    binary_reader.file = files[0];
    
    binary_reader.onloadend = function() {
        alert("Loaded file: " + this.file.name + " length: " + this.result.length);
    }

    binary_reader.readAsBinaryString(files[0]);
    
    $("form").reset();
}

Note that the $("form").reset(); call clears the input form.

Drag & Drop

Forms are not the only way to get a FileList; you can also get files from drag and drop events. You need to handle three events: dragenter, dragover and drop.

<body ondragenter="dragEnterHandler(event)" ondragover="dragOverHandler(event)" ondrop="dropHandler(event)">

The default handling of these is fairly straightforward:

function dragEnterHandler(e) { e.preventDefault(); }
function dragOverHandler(e) { e.preventDefault(); }
function dropHandler(e) {
    e.preventDefault();
    loadFiles(e.dataTransfer.files);
}

Parsing files

The interesting thing here is readAsBinaryString; when this method is used, result ends up being a binary string. This is pretty new because, as far as I know, there hasn’t really been a good way to access binary data in JavaScript before. Each character in the binary string represents a byte, and has a character code in the range [0..255].

This is great, because it means that we can parse binary strings locally, without having to upload files to a server for processing. Unfortunately there isn’t a great deal of support for handling binary data in JavaScript; there isn’t anything like Python’s struct module.

Luckily it isn’t too hard to write something close to this. Mostly we want to parse unsigned and signed integers of arbitrary length. To be useful, we need to handle both little and big endianness. A very simple implementation of parsing an unsigned integer is:

    function parseNum(endian, data, offset, size) {
        var i;
        var ret = 0;
        var big_endian = (endian === ">");
        if (offset === undefined) offset = 0;
        if (size === undefined) size = data.length - offset;
        /* Walk the bytes starting from the most-significant end */
        for (big_endian ? i = offset : i = offset + size - 1;
             big_endian ? i < offset + size : i >= offset;
             big_endian ? i++ : i--) {
            ret <<= 8;
            ret += data.charCodeAt(i);
        }
        return ret;
    }

endian specifies the endianness; the string literal ">" for big-endian and "<" for little-endian (copying the Python struct module). data is the binary data to parse. An offset can be specified to enable parsing from the middle of a binary structure; this defaults to zero. The size of the integer to parse can also be specified; it defaults to the remainder of the string.
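A couple of worked examples make the byte order concrete (the function is repeated here, with ret explicitly initialised, so the snippet stands alone):

```javascript
// parseNum as above, with ret explicitly initialised to 0.
function parseNum(endian, data, offset, size) {
    var i;
    var ret = 0;
    var big_endian = (endian === ">");
    if (offset === undefined) offset = 0;
    if (size === undefined) size = data.length - offset;
    for (big_endian ? i = offset : i = offset + size - 1;
         big_endian ? i < offset + size : i >= offset;
         big_endian ? i++ : i--) {
        ret <<= 8;
        ret += data.charCodeAt(i);
    }
    return ret;
}

// The two-character string "\u0001\u0002" is the byte sequence 0x01 0x02:
parseNum(">", "\u0001\u0002"); // big-endian:    0x0102 === 258
parseNum("<", "\u0001\u0002"); // little-endian: 0x0201 === 513
```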

Signed integers require a little more work. Although there are multiple ways of representing signed numbers, by far the most common is the two’s complement method. A function that takes the same inputs as parseNum is:

    function parseSnum(endian, data, offset, size) {
        var i;
        var ret = 0;
        var neg;
        var big_endian = (endian === ">");
        if (offset === undefined) offset = 0;
        if (size === undefined) size = data.length - offset;
        for (big_endian ? i = offset : i = offset + size - 1;
             big_endian ? i < offset + size : i >= offset;
             big_endian ? i++ : i--) {
            if (neg === undefined) {
                /* Negative if the top bit of the most-significant byte is set */
                neg = (data.charCodeAt(i) & 0x80) === 0x80;
            }
            ret <<= 8;
            /* If it is negative we invert the bits */
            ret += neg ? ~data.charCodeAt(i) & 0xff : data.charCodeAt(i);
        }
        if (neg) {
            /* Complete the two's complement: add one and negate */
            ret += 1;
            ret *= -1;
        }
        return ret;
    }
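Again, a worked example helps: the 16-bit pattern 0xFFFE is -2 in two’s complement. (The function is repeated here, with ret explicitly initialised, so the snippet stands alone.)

```javascript
// parseSnum as above, with ret explicitly initialised to 0.
function parseSnum(endian, data, offset, size) {
    var i;
    var ret = 0;
    var neg;
    var big_endian = (endian === ">");
    if (offset === undefined) offset = 0;
    if (size === undefined) size = data.length - offset;
    for (big_endian ? i = offset : i = offset + size - 1;
         big_endian ? i < offset + size : i >= offset;
         big_endian ? i++ : i--) {
        if (neg === undefined) {
            /* Negative if the top bit of the most-significant byte is set */
            neg = (data.charCodeAt(i) & 0x80) === 0x80;
        }
        ret <<= 8;
        /* If it is negative we invert the bits */
        ret += neg ? ~data.charCodeAt(i) & 0xff : data.charCodeAt(i);
    }
    if (neg) {
        /* Complete the two's complement: add one and negate */
        ret += 1;
        ret *= -1;
    }
    return ret;
}

parseSnum(">", "\u00ff\u00fe"); // 0xFFFE as two's complement === -2
parseSnum(">", "\u0000\u0005"); // positive values come through unchanged: 5
```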

JpegMeta API

JpegMeta is a simple, pure JavaScript library for parsing Jpeg meta-data. To use it, include the jpegmeta.js file. This creates a single, global, module object JpegMeta. The JpegMeta module object has one public interface of use, the JpegFile class. You can use this to construct new JpegFile instances. The input is a binary string (for example, as returned from a FileReader object). An example is:

	var jpeg = new JpegMeta.JpegFile(this.result, this.file.name);

After creation you can then access various meta-data properties, categorised by meta-data groups. The main groups of meta-data are:

general
Information extracted from the JPEG SOF segment. In particular the height, width and colour depth.
jfif
Meta-data from the JFIF APP0 segment. This usually includes resolution, aspect ratio and colour space meta-data.
tiff
Generic TIFF meta-data extracted from the Exif meta-data APP1 segment. Includes things such as camera make and model, orientation and date-time.
exif
Exif specific meta-data extracted from the Exif meta-data APP1 segment. Includes camera specific things such as white balance, flash, metering mode, etc.
gps
GPS related information extracted from the Exif meta-data APP1 segment. Includes latitude, longitude etc.

Meta-data groups can be accessed directly, for example:

 var group = jpeg.gps;

A lookup table is also provided: jpeg.metaGroups. This associative array can be used to determine which meta-data groups a particular jpeg file instance actually has.

The MetaGroup object has a name field, a description field and an associative array of properties.

Properties in a given group can be accessed directly. E.g:

 var lat = jpeg.gps.latitude;

Alternatively, the metaProps associative array can be used to determine which properties are available.

The metaProp object has a name field, a description field, and a value field.
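Putting the group and property layout together, a small helper can dump everything a file exposes. This is only a sketch: the field names (metaGroups, metaProps, name, value) are taken from the description above, and the sample object is a hand-made mock rather than real parser output.

```javascript
// Dump every property of every meta-data group as "group.prop = value".
// Assumes the metaGroups/metaProps layout described in this post.
function dumpMeta(jpeg) {
    var lines = [];
    for (var g in jpeg.metaGroups) {
        var group = jpeg.metaGroups[g];
        for (var p in group.metaProps) {
            var prop = group.metaProps[p];
            lines.push(group.name + "." + prop.name + " = " + prop.value);
        }
    }
    return lines;
}

// A hand-made mock standing in for a parsed JpegFile instance:
var mock = {
    metaGroups: {
        gps: {
            name: "gps",
            metaProps: {
                latitude: { name: "latitude", value: 52.5 }
            }
        }
    }
};
// dumpMeta(mock) → ["gps.latitude = 52.5"]
```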

Conclusion

The File API adds a powerful new capability to native HTML5 applications.

pexif 0.13 release

Thu, 23 Apr 2009 11:19:06 +0000
tech code pexif python

pexif is a Python library for editing an image’s EXIF data. Somewhat embarrassingly, the last release I made (0.12) had a really stupid bug in it. This has now been rectified, and a new version (0.13) is now available.

Super OKL4

Sat, 21 Feb 2009 11:39:48 +0000
tech okl4

Cool! Someone in Japan seems to be porting OKL4 to the SuperH architecture.