namedfields create performance

Mon, 01 Dec 2014 20:35:37 +0000

This is a follow-up to yesterday’s post about namedtuples.

Yesterday I mostly focussed on the performance of accessing attributes on a named tuple object, and the namedfields decorator approach that I showed ended up with the same performance as the standard library namedtuple. One operation that I didn’t consider, but which is actually reasonably common, is the creation of a new object.

My implementation relied on a generic __new__ that used the underlying _fields to work out the actual arguments to pass to the tuple.__new__ constructor:

    def __new__(_cls, *args, **kwargs):
        if len(args) > len(_cls._fields):
            raise TypeError("__new__ takes {} positional arguments but {} were given".format(len(_cls._fields) + 1, len(args) + 1))

        missing_args = tuple(fld for fld in _cls._fields[len(args):] if fld not in kwargs)
        if len(missing_args):
            raise TypeError("__new__ missing {} required positional arguments".format(len(missing_args)))
        extra_args = tuple(kwargs.pop(fld) for fld in _cls._fields[len(args):] if fld in kwargs)
        if len(kwargs) > 0:
            raise TypeError("__new__ got an unexpected keyword argument '{}'".format(list(kwargs.keys())[0]))

        return tuple.__new__(_cls, tuple(args + extra_args))

This seems to work (in my limited testing), but the code is pretty nasty (I’m far from confident that it is correct), and it is also slow: about 10x slower than a class created with the namedtuple factory function, where the generated constructor is just:

    def __new__(_cls, bar, baz):
        'Create new instance of Foo2(bar, baz)'
        return _tuple.__new__(_cls, (bar, baz))

As a result of this finding, I’ve changed my constructor approach, and now generate a custom constructor for each new class using eval. It looks something like:

        str_fields = ", ".join(fields)
        new_method = eval("lambda cls, {}: tuple.__new__(cls, ({}))".format(str_fields, str_fields), {}, {})
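To make the mechanics concrete, here is a self-contained sketch of the idea (make_new and Point are illustrative names of mine, not from the pyutil code):

    def make_new(fields):
        # Generate a __new__ specialised to these exact field names; e.g. for
        # ('x', 'y') the generated source is:
        #   lambda cls, x, y: tuple.__new__(cls, (x, y,))
        str_fields = ", ".join(fields)
        src = "lambda cls, {}: tuple.__new__(cls, ({},))".format(str_fields, str_fields)
        return eval(src, {}, {})

    Point = type('Point', (tuple,), {'__slots__': (), '__new__': make_new(('x', 'y'))})
    print(Point(1, 2))  # prints: (1, 2)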

With this change, constructor performance is on par with the namedtuple approach, and I’m much more confident that the code is actually correct!

I’ve cleaned up the namedfields code a little, and made it available as part of my pyutil repo.

A better namedtuple for Python (?)

Sun, 30 Nov 2014 16:56:23 +0000

Python's namedtuple factory function is, perhaps, the most under-utilised feature in Python. Classes created using namedtuple are great because they lead to immutable objects (as compared to normal classes, which lead to mutable objects). The goal of this blog post isn’t to convince you that immutable objects are a good thing, but to take a bit of a deep dive into the namedtuple construct and explore some of its shortcomings and some of the ways in which it can be improved.

NOTE: All the code here is available in this gist.

As a quick primer, let’s have a look at a very simple example:

from collections import namedtuple

Foo = namedtuple('Foo', 'bar baz')
foo = Foo(5, 10)
print(foo)

NOTE: All my examples target Python 3.3. They will not necessarily work in earlier versions; in particular, at least some of the later ones will not work in Python 2.7.

For simple things like this example, the existing namedtuple works pretty well; however, the repetition of Foo is a bit of a code smell. Also, even though Foo is a class, it is created significantly differently to a normal class (which I guess could be considered an advantage).

Let’s try and do something a little more complex. If we find ourselves needing to know the product of bar and baz a bunch of times, we probably want to encapsulate that rather than writing foo.bar * foo.baz everywhere. (OK, this example isn’t the greatest in the history of technical writing, but just bear with me on this). So, as a first approach we can just write a function:

def foo_bar_times_baz(foo):
    return foo.bar * foo.baz

print(foo_bar_times_baz(foo))

There isn’t anything wrong with this, functions are great! But the normal pattern in Python is that when we have a function that does stuff on a class instance we make it a method, rather than a function. We could debate the aesthetic and practical tradeoffs between methods and functions, but let’s just go with the norm here and assume we’d prefer to do foo.bar_times_baz(). The canonical approach to this is:

class Foo(namedtuple('Foo', 'bar baz')):
    def bar_times_baz(self):
        return self.bar * self.baz


foo = Foo(5, 10)
print(foo.bar_times_baz())

Now this works but, again, we have the repetition of Foo, which bugs me. Coupled with this, we are calling a function inside the class definition, which is a bit ugly. There are also some minor practical concerns, because there are now multiple Foo classes in existence, which can at times cause confusion. This can be avoided by appending an underscore to the name of the super-class, e.g.: class Foo(namedtuple('Foo_', 'bar baz')).

An alternative to this which I’ve used from time to time is to directly update the Foo class:

Foo = namedtuple('Foo', 'bar baz')
Foo.bar_times_baz = lambda self: self.bar * self.baz
foo = Foo(5, 10)
print(foo.bar_times_baz())

I quite like this, and think it works well for methods which can be written as an expression, but not all Python programmers are particularly fond of lambdas. And although I know that classes are mutable, I prefer to consider them as immutable, so modifying them after creation is also a bit ugly.
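If the lambda itself is the objection, attaching a plain def works just as well (with the same caveat about mutating the class after creation):

from collections import namedtuple

def bar_times_baz(self):
    return self.bar * self.baz

Foo = namedtuple('Foo', 'bar baz')
Foo.bar_times_baz = bar_times_baz

foo = Foo(5, 10)
print(foo.bar_times_baz())  # 50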

The way that the namedtuple function works is by creating a string containing the class definition and then using exec on that string to create the new class. You can see the string version of that class definition by passing verbose=True to the namedtuple function. For our Foo class it looks a bit like:

class Foo(tuple):
    'Foo(bar, baz)'

    __slots__ = ()

    _fields = ('bar', 'baz')

    def __new__(_cls, bar, baz):
        'Create new instance of Foo(bar, baz)'
        return _tuple.__new__(_cls, (bar, baz))

    ....

    def __repr__(self):
        'Return a nicely formatted representation string'
        return self.__class__.__name__ + '(bar=%r, baz=%r)' % self

    .....

    bar = _property(_itemgetter(0), doc='Alias for field number 0')

    baz = _property(_itemgetter(1), doc='Alias for field number 1')

I’ve omitted some of the method implementations for brevity, but you get the general idea. If you have a look at that you might have the same thought I did: why isn’t this just a sub-class? It seems practical to construct:

class NamedTuple(tuple):
    __slots__ = ()

    _fields = None  # Subclass must provide this

    def __new__(_cls, *args):
        # Error checking, and **kwargs handling omitted for brevity
        return tuple.__new__(_cls, tuple(args))

    def __repr__(self):
        'Return a nicely formatted representation string'
        fmt = '(' + ', '.join('%s=%%r' % x for x in self._fields) + ')'
        return self.__class__.__name__ + fmt % self

    def __getattr__(self, field):
        try:
            idx = self._fields.index(field)
        except ValueError:
            raise AttributeError("'{}' NamedTuple has no attribute '{}'".format(self.__class__.__name__, field))

        return self[idx]

Again, some method implementations are omitted for brevity (see the gist for the gory details). The main differences from the generated class are the consistent use of _fields rather than hard-coding things, and the need for a __getattr__ method rather than hard-coded properties.

This can be used something like this:

class Foo(NamedTuple):
    _fields = ('bar', 'baz')

This fits the normal pattern for creating classes much better than a constructor function. It is two lines rather than the slightly more concise one-liner, but we don’t need to duplicate the Foo name so that is a win. In addition, when we want to start adding methods, we know exactly where they go.
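For example, the earlier bar_times_baz method now has an obvious home in the class body:

class Foo(NamedTuple):
    _fields = ('bar', 'baz')

    def bar_times_baz(self):
        return self.bar * self.baz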

This isn’t without some drawbacks. The biggest problem is that we’ve just inadvertently made our subclass mutable, which is certainly not ideal! (A tuple subclass without __slots__ gets a per-instance __dict__, so arbitrary attributes can be set on instances; this is demonstrated below.) The problem can be rectified by adding a __slots__ = () in our sub-class:

class Foo(NamedTuple):
    __slots__ = ()
    _fields = ('bar', 'baz')
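To see the original problem concretely, here is a quick check (MutableFoo is just an illustrative name for the version without __slots__):

class MutableFoo(NamedTuple):
    _fields = ('bar', 'baz')  # note: no __slots__ = ()

foo = MutableFoo(5, 10)
foo.qux = 42      # silently allowed: the instance has grown a __dict__
print(foo.qux)    # 42 -- our 'immutable' value object now carries mutable state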

At this point this approach no longer looks so good. Some other minor drawbacks are that there is no checking that the field names are valid in any way, which the namedtuple approach handles correctly. The final drawback is on the performance front. We can measure the attribute access time:

from timeit import timeit

RUNS = 1000000

direct_idx_time = timeit('foo[0]', setup='from __main__ import foo', number=RUNS)
direct_attr_time = timeit('foo.bar', setup='from __main__ import foo', number=RUNS)
sub_class_idx_time = timeit('foo[0]', setup='from __main__ import foo_sub_class as foo', number=RUNS)
sub_class_attr_time = timeit('foo.bar', setup='from __main__ import foo_sub_class as foo', number=RUNS)


print(direct_idx_time)
print(direct_attr_time)
print(sub_class_idx_time)
print(sub_class_attr_time)

The results are that accessing a value via direct indexing is the same for both approaches; however, when accessing via attributes the sub-class approach is 10x slower than the original namedtuple approach (which was itself about 3x slower than direct indexing).

Just for kicks, I wanted to see how far I could take this approach. So, I created a little optimize function, which can be applied as a decorator on a namedtuple class:

from operator import itemgetter

def optimize(cls):
    # Install a property for each field, so attribute access no longer
    # falls through to the slow __getattr__.
    for idx, fld in enumerate(cls._fields):
        setattr(cls, fld, property(itemgetter(idx), doc='Alias for field number {}'.format(idx)))
    return cls


@optimize
class FooSubClassOptimized(NamedTuple):
    __slots__ = ()
    _fields = ('bar', 'baz')

    def bar_times_baz(self):
        return self.bar * self.baz

The optimize function goes through and adds a property to the class for each of its fields. This means that when an attribute is accessed, the lookup short-cuts through the property rather than using the relatively slow __getattr__ method (properties live on the class, so they are found by normal attribute lookup and __getattr__ is never consulted). This results in a considerable speed-up; however, performance was still about 1.2x slower than the namedtuple approach.
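A quick sanity check (not from the original gist) that the decorator really did install the properties:

foo = FooSubClassOptimized(5, 10)
print(foo.bar)  # 5 -- now served by the property, not __getattr__
print(isinstance(vars(FooSubClassOptimized)['bar'], property))  # True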

If anyone can explain why this takes a performance hit I’d appreciate it, because I certainly can’t work it out!

So, at this point I thought my quest for a better namedtuple had come to an end, but it seemed worth pursuing the decorator approach some more. Rather than modifying the existing class, we could instead use it as a template and return a new class. After some iteration I came up with a class decorator called namedfields, which can be used like:

@namedfields('bar', 'baz')
class Foo(tuple):
    pass

This approach is slightly more verbose than sub-classing or the namedtuple factory; however, I think it should be fairly clear, and it doesn’t include any redundancy. The decorator constructs a new class based on the input class: it adds the properties for all the fields, ensures __slots__ is correctly added to the class, and attaches all the useful NamedTuple methods (such as _replace and __repr__) to the new class.

Classes constructed this way have identical attribute-access performance to namedtuple factory classes.
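They also behave like factory-built named tuples in day-to-day use. A quick illustration (assuming the _replace attached from NamedTuple matches the standard namedtuple semantics):

@namedfields('bar', 'baz')
class Foo(tuple):
    pass

foo = Foo(5, 10)
print(foo)                  # Foo(bar=5, baz=10)
print(foo.bar * foo.baz)    # 50
print(foo._replace(bar=7))  # Foo(bar=7, baz=10)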

The namedfields function is:

def namedfields(*fields):
    def inner(cls):
        if not issubclass(cls, tuple):
            raise TypeError("namefields decorated classes must be subclass of tuple")

        attrs = {
            '__slots__': (),
        }

        methods = ['__new__', '_make', '__repr__', '_asdict',
                   '__dict__', '_replace', '__getnewargs__',
                   '__getstate__']

        attrs.update({attr: getattr(NamedTuple, attr) for attr in methods})

        attrs['_fields'] = fields

        attrs.update({fld: property(itemgetter(idx), doc='Alias for field number {}'.format(idx))
                      for idx, fld in enumerate(fields)})

        attrs.update({key: val for key, val in cls.__dict__.items()
                      if key not in ('__weakref__', '__dict__')})

        return type(cls.__name__, cls.__bases__, attrs)

    return inner

So, is the namedfields decorator better than the namedtuple factory function? Performance-wise, things are on par for the common operation of accessing attributes. From an aesthetic point of view I think things are also on par for simple classes:

Foo = namedtuple('Foo', 'bar baz')

# compared to

@namedfields('bar', 'baz')
class Foo(tuple): pass

namedtuple wins if you prefer a one-liner (although if you are truly perverse you can always do: Foo = namedfields('bar', 'baz')(type('Foo', (tuple, ), {}))). If you don’t really care about one vs. two lines, then it is a pretty line-ball call. However, when we compare what is required to add a simple method, the decorator becomes a clear winner:

class Foo(namedtuple('Foo', 'bar baz')):
    def bar_times_baz(self):
        return self.bar * self.baz

# compared to

@namedfields('bar', 'baz')
class Foo(tuple):
    def bar_times_baz(self):
        return self.bar * self.baz

There is an extra line for the decorator in the namedfields approach; however, this is a lot clearer than it being forced in as a super-class.

Finally, I wouldn’t recommend using this right now: bringing in a new dependency for something that provides only marginal gains over the built-in functionality isn’t really worth it, and this code is really only proof-of-concept at this point in time.

I will be following up with some extensions to this approach that cover off some of my other pet hates with the namedtuple approach: custom constructors and subclasses.

Inversion of Meaning

Sun, 15 Dec 2013 15:03:01 +0000

This is a rant; if you don't like rants, well, just don't read it!

Computer science is full of fuzzy terms, and cases where multiple concepts are mapped to the same term (shall we have the argument of whether an operating system is just the thing running in privileged mode or also includes any of the libraries? and whether we should call that thing Linux or GNU/Linux?). Usually I can just deal with it; there generally isn't a right answer, or if there ever was it has been lost in the mists of time. And of course, language is a living thing, so I get that the meaning of words changes over time. But never have I seen a term's meaning so completely eviscerated as what has happened to the previously useful and meaningful term Inversion of Control, which has become completely conflated with the term Dependency Injection.

Wiktionary provides a relatively serviceable definition of the term:

Abstract principle describing an aspect of some software architecture designs in which the flow of control of a system is inverted in comparison to the traditional architecture.

A trivial example would be reading some data from the console. The usual flow of control would be something like:

function read_input()
{
   input = /* some magic to get the input */
   return input
}

function menu()
{
   loop
   {
       input = read_input()
       /* Do something with the input */
   }
}

function main()
{
    menu()
}

The normal flow of control is that the application passes control to the read_input() function, the function returns with some result and then the application does something with the input. Now we can invert this by doing something like:

function input_reader(handler)
{
    loop
    {
        input = /* some magic to get the input */
        handler(input)
    }
}

function menu_handler(input)
{
     /* Do something with the input */
}

function main()
{
    input_reader(menu_handler);
}

In this example rather than the application code calling the read_input() function, the input reading code calls the application. That right there is the inversion!

So, breaking it down: control is normally passed from an application to a library (via a function call). If we invert this, i.e. put it in the opposite order, then the library passes control to the application, and we can observe an Inversion of Control.

Such a pattern is also called the Hollywood Principle: Don't call us; we'll call you. (Don't ever accuse computer scientists of lacking a sense of humour!).

Of course there is nothing mind-blowing about structuring code like this. Clark discussed structuring systems using upcalls [PDF] almost 30 years ago.

So, inversion of control isn't really a big concept. Pretty simple, straightforward, and probably not deserving of an 18-character phrase, but it certainly means something, and that something is a useful concept to use when talking about system design, so it needs some kind of label.

A real-world example can be found in the C standard library, where qsort calls in to the application to perform a comparison between elements. Another example is the Expat XML parser, which calls back into the application to handle the various XML elements. Glib's main loop is probably another good example. I think it is interesting to note that the size or scope of an inversion of control can vary. In the case of qsort, the inversion is very localised; by comparison, in the Glib example basically the whole program exhibits inversion of control.
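Python's sorting machinery shows the same localised inversion as the qsort case: the library calls back into application code for each comparison. A tiny illustration:

from functools import cmp_to_key

def compare_lengths(a, b):
    # Application-supplied policy; sorted() calls *us*, not the other way around.
    return len(a) - len(b)

print(sorted(['banana', 'fig', 'apple'], key=cmp_to_key(compare_lengths)))
# ['fig', 'apple', 'banana']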

The one quibble I would have with the Wiktionary definition is the use of the word principle. To me a principle implies some desirable design aspect that you generally strive towards. For example, the Principle of Least Privilege or the Separation of Concerns principle.

Inversion of Control is not really something you strive towards. In fact it can often lead to more complex designs. For example, when parsing XML it is generally simpler to call a function that converts the XML data into a document object model (DOM), and then make calls on the DOM, compared to an Expat or SAX style approach. Of course, there are advantages to the Expat/SAX approach such as reduced memory footprint. The point here is that Inversion of Control is a consequence of other design goals. No one starts out with the goal of: let's design this system so that it exhibits inversion of control (well, OK, maybe node.js did!). Rather it is a case of: so that we could achieve some specific design goal, we had to design this part of the system so that it had inversion of control.

If you didn't really like my description of Inversion of Control, I can recommend reading Martin Fowler's article, which I think does a pretty good job of explaining the concept.

However, the final paragraph of the article starts to point to the focus of this rant:

There is some confusion these days over the meaning of inversion of control due to the rise of IoC containers; some people confuse the general principle here with the specific styles of inversion of control (such as dependency injection) that these containers use. The name is somewhat confusing (and ironic) since IoC containers are generally regarded as a competitor to EJB, yet EJB uses inversion of control just as much (if not more).

This starts to introduce new terms such as IoC containers (see, I told you 18 characters was too much; look how swiftly it has been reduced to a mere 3!) and dependency injection. When I first read this, I had no idea what Martin was talking about. What is an IoC container? What is dependency injection? (I'm ashamed to say I did know that EJB was an Enterprise JavaBean.) And my curiosity about this final paragraph led me on a magical journey of discovery, where the more I read the more I thought that I must have been confused about what IoC is (and I'm pretty sure I've been using the term off and on for over 11 years now, so this was a bit of a concern for me!). So, if you are still reading, you get to come along with me on a ride of pedantry and computer science history as I try to work out why everyone else is wrong, because I know I can't be wrong!

Let's start with Wikipedia. The first paragraph of the Inversion of Control article on Wikipedia defines IoC as:

a programming technique, expressed here in terms of object-oriented programming, in which object coupling is bound at run time by an assembler object and is typically not known at compile time using static analysis.

I'm really not sure what sort of mental gymnastics are required to get from my definition to a definition such as that. They are certainly beyond my capabilities. But it's not just Wikipedia.

The top answer to the StackOverflow question What is Inversion of Control? isn't much better:

The Inversion of Control (IoC) and Dependency Injection (DI) patterns are all about removing dependencies from your code.

Sorry, I don't get it. Inversion of Control is all about, you know, inverting control. At least this explicitly conflates IoC with this dependency injection thing, which gives us some hope of tracing through time why these things seem so intertwined.

And just in case you thought it was just Wikipedia and StackOverflow, you can pick just about any blog post on the subject (excepting Martin's, previously linked above, and I guess this one, but that is a bit self-referential), and you'll find similar descriptions. Just to pick a random blog post on the subject:

Inversion of Control (IoC) is an approach in software development that favors removing sealed dependencies between classes in order to make code more simple and flexible. We push control of dependency creation outside of the class, while making this dependency explicit.

Again this recurring theme of dependencies. At least all these wrong definitions are more or less consistently wrong. Somewhere in the history of software development dependencies and inversion got all mixed up in the heads of developers. It is interesting to note that most of the examples and posts on the subject are very object-oriented (mostly Java).

To get to the bottom of this I guess I need to learn me some of this dependency injection terminology (and learn how to spell dependency without a spell-checker... hint, there is no a!). Given the quality of the IoC article, I'm skipping the Wikipedia article on dependency injection. The canonical source (by proclamation of the Google-mediated hive-mind) is Martin Fowler's article. His article on IoC was pretty good (apart from a confusing last paragraph), so this looks like a great place to start.

In the Java community there's been a rush of lightweight containers that help to assemble components from different projects into a cohesive application. Underlying these containers is a common pattern to how they perform the wiring, a concept they refer under the very generic name of "Inversion of Control". In this article I dig into how this pattern works, under the more specific name of "Dependency Injection"

Aha! I knew it was Java's fault! OK, so, about 10 years ago (2003), the Java community discovered some new pattern, and were calling it Inversion of Control, and Martin's article is all about this pattern, but he calls it by a different name Dependency Injection (DI). Well, I guess that explains the conflation of the two terms at some level. But I really want to understand why the Java community used the term Inversion of Control in the first place, when, as far as I can tell, it had a relatively clear meaning pre-2003.

These lightweight containers are mentioned as originators of this pattern, so they look like a good place to start investigating. One of those mentioned in Martin's article is PicoContainer. Unfortunately that goes to a dead link. Thankfully, thanks to the awesomeness that is the Wayback Machine, we can have a look at the PicoContainer website circa 2003. That website mentioned that it honors the Inversion of Control pattern (IoC), but didn't really provide any detail on what it considers the IoC pattern to be. Thankfully, it has a history section that attempts to shed some light on the subject: Apache's Avalon project has been selling the IoC pattern for many years now.

OK, great: so as of ~2003 this Avalon project had already been talking about the IoC pattern for many years. The Apache Avalon project has (unfortunately?) closed, but there are still references and docs in various places. One bit of the project's documentation is a guide to inversion of control.

It introduces IoC as: One of the key design principles behind Avalon is the principle of Inversion of Control. Inversion of Control is a concept promoted by one of the founders of the Avalon project, Stefano Mazzocchi.

It goes on to provide a very different description of IoC to my understanding:

Chain of Command This is probably the clearest parallel to Inversion of Control. The military provides each new recruit with the basic things he needs to operate at his rank, and issues commands that recruit must obey. The same principle applies in code. Each component is given the provisions it needs to operate by the instantiating entity (i.e. Commanding Officer in this analogy). The instantiating entity then acts on that component how it needs to act.

The concrete example provided is:

class MyComponent
    implements LogEnabled
{
    Logger logger;

    public void enableLogging(Logger newLogger)
    {
        this.logger = newLogger;
    }

    public void myMethod()
    {
        logger.info("Hello World!");
    }
}

With an explanation:

The parent of MyComponent instantiates MyComponent, sets the Logger, and calls myMethod. The component is not autonomous, and is given a logger that has been configured by the parent. The MyComponent class has no state apart from the parent, and has no way of obtaining a reference to the logger implementation without the parent giving it the implementation it needs.

OK, I guess with such a description I can start to see how this could end up being called inversion of control. The normal order of things is that a class creates its dependencies in its constructor; this pattern changes that so the caller provides the dependencies instead. This doesn't really feel like inversion to me, but I guess it could be considered as such. And equally I don't really think there is control as such involved here; maybe control of dependencies?

I think you could claim that myMethod in the example exhibits some localised inversion of control when it calls the logger, but that isn't really what is identified by any of the explanatory text.

Anyway, this isn't really a particularly pleasing place to stop; there must be more to this story.

Some extensive research work (OK, I just used Google) led me to the aforementioned Stefano Mazzocchi's blog, on which he has an insightful post about the origins of the use of the term IoC within the Avalon community.

I introduced the concept of IoC in the Avalon community in 1998 and this later influenced all the other IoC-oriented containers. Some believed that I was the one to come up with the concept but this is not true, the first reference I ever found was on Michael Mattson’s thesis on Object Oriented Frameworks: a survey on methodological issues.

At last! The origin of the term, or at least the point at which it became popular. I'll quote the same paragraph from the thesis that Stefano did:

The major difference between an object-oriented framework and a class library is that the framework calls the application code. Normally the application code calls the class library. This inversion of control is sometimes named the Hollywood principle, “Do not call us, we call You”.

Which brings things back full-circle to the start, because this certainly matches my understanding of things. What I fail to understand is how you get from this quote to the military chain of command point of view.

Stefano continues (emphasis added):

Now it seems that IoC is receiving attention from the design pattern intelligentia (sic): Martin Fowler renames it Dependency Injection, and, in my opinion, misses the point: IoC is about enforcing isolation, not about injecting dependencies.

I think this is really the crux of the issue. I think there is a massive leap being made to get to any claim that IoC is about enforcing isolation. I don't think the previous uses of the term imply this at all. Certainly not the quoted thesis.

Doing some more digging, there may be some justification in John Vlissides' column Protection, Part I: The Hollywood Principle. The column describes a C++ object-oriented design for a filesystem that has some protection and enforcement. (As a side note, the idea of providing any level of protection enforcement through language mechanisms is abhorrent, but let's just accept that this is a desirable thing for the moment.) In any case, the pertinent quote for me is:

The Template Method pattern leads to an inversion of control known as the "Hollywood Principle," or, "Don't call us; we'll call you."

The inversion of control is a consequence of using the Template Method pattern, which may itself have something to say about enforcing some isolation, but the IoC itself is a consequence, not an aim.

So let's recap where we are right now. We have one view of IoC which is just a simple way of describing what happens when you flip things around so that a framework calls the application, rather than the application calling the library. This isn't good or bad; it is just a way of describing some part of a software system's design.

Secondly, we have the view originally promoted, it would seem, by Stefano, but subsequently repeated in many places, where IoC is a design principle with an explicit goal of enforcing isolation.

I find it hard to want to use IoC as a design principle. We already have a well-established term for a design principle that is all about isolation: separation of concerns, which goes back to Dijkstra a good forty or so years ago.

Given this, I think IoC should only be used to describe an aspect of design where the flow of control is inverted when compared to the traditional architecture. If I can be so bold, this is the correct definition. Other definitions are wrong!

(And just to be clear, there is nothing wrong with dependency injection; it's a great way of structuring code to make it more reusable. Just don't call it inversion of control!)

Debugging the GDB remote protocol

Sun, 15 Sep 2013 20:00:25 +0000
embedded tech gdb

If your main use of GDB is for debugging embedded devices, you can't really go too long without encountering the GDB remote protocol. This is the protocol used to communicate between the GDB application and the debugger, usually over either TCP or serial.

Although this protocol is documented, it is not always clear exactly which packets are actually used and when. Not knowing which packets to expect makes implementing the debugger side of things a little tricky. Thankfully there is a straightforward way to see what is going on, from the GDB prompt:

(gdb) set debug remote 1

Black Magic Probe ARM semihosting

Sun, 15 Sep 2013 19:12:48 +0000
tech arm embedded

If you are developing stuff on ARM Cortex-M series devices and need a reasonably priced debugger, I'd have to recommend the Black Magic Probe. For about 70 bucks you get a pretty fast debugger that directly understands the GDB remote protocol. This is kind of neat as you don't need to run some local server (e.g.: stlink); the debugger appears as a standard CDC ACM USB serial device.

The really cool thing about it is that the debugger firmware is open source, so when you suspect a bug in the debugger (which, just like compiler bugs, does happen in the real world) you can actually go and see what is going on and fix it. It also means you can add features when they are missing.

Although the debugger has some support for ARM semihosting, unfortunately this support is not comprehensive, which means if you use an unsupported semihosting operation in your code you end up with a nasty SIGTRAP rather than the operation you were expecting.

Unfortunately one of the simplest operations, SYS_WRITEC, which simply outputs a single character, was missing, which was disappointing since my code used it rather extensively for debug output! But one small commit later and the debug characters are flowing again. (As with many of these things, the two lines were the easy bit; the hardest and most time-consuming bit was actually installing the necessary build pre-requisites!)

pexif project moved

Sun, 30 Jun 2013 12:14:38 +0000
tech python pexif

My Python EXIF parsing library is joining the slow drift of projects away from Google Code to GitHub. If you are interested in contributing please send pull requests and I'll attempt to merge things in a timely manner.

As you can probably tell from the commit log, I haven’t really been actively working on this code base for a while, so if anyone out there feels like maintaining it, please just fork it on GitHub, let me know, and I’ll point people in your direction.

Invalid integer constants

Sun, 10 Feb 2013 19:37:29 +0000

So, another fun fact about C. You'd think that a language would make it possible to express any value of its built-in types as a simple constant expression. Of course, you can't do that in C.

In C you can express both signed and unsigned constants of any of the integer types. For example, 5 is a signed int; 5L is a signed long; 5U is an unsigned int. But there is no way to express a negative number as a constant! Despite what you might easily assume, -5 is not a constant; rather it is the unary - operator applied to the constant 5. Now, generally this doesn't cause any problems. Unless, of course, you care about types and care about having programs that are valid according to the standard.

So, assuming two's complement 32-bit integers, if you want an expression that has the type int and the minimum value (-2147483648), then you need to be careful. If you choose the straightforward approach you'd write this as -2147483648, and you'd be wrong. Why? Well, the integer constant 2147483648 has the type long, since it cannot be represented as an int (the maximum value of a 32-bit int is 2147483647), and the type of an expression of the form -<long> is, of course, long. So, what can we do instead? Well, there are lots of approaches, but probably the simplest is: -2147483647 - 1. In this case 2147483647 can be represented as an int, -2147483647 is still of type int, and then you can safely subtract 1.

Of course, this becomes even more interesting if you are dealing with the type long long. Assuming a two's complement, 64-bit type, the minimum value is -9223372036854775808; but of course you can't just write -9223372036854775808LL, since 9223372036854775808LL can't be represented by the type long long, so (assuming the implementation doesn't define any extended integer types) this would actually be an invalid C program.

with fail():

Sun, 20 Jan 2013 10:19:59 +0000

So, yesterday's article on with turned out to be a bit wrong. A lesson in reading the explanatory text and not just the example code. (Or possibly a lesson in providing better example code!)

So, I gave an example along the lines of:

import os
from contextlib import contextmanager

@contextmanager
def chdir(path):
    # This code is wrong. Don't use it!
    cwd = os.getcwd()
    os.chdir(path)
    yield
    os.chdir(cwd)

Now, this appears to mostly work. The problem occurs when you have some code like:

with chdir('/'):
    raise Exception()

In this case the final cleanup code (os.chdir(cwd)) will not be executed, which is almost certainly not what you want. The correct way to write this is:

@contextmanager
def chdir(path):
    cwd = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(cwd)

Writing it this way, the final os.chdir(cwd) will be executed, and the exception will still be propagated. I think this kind of sucks, because I'd really have preferred the first version to have this behaviour by default. To this end I've created a simplecontextmanager function decorator. Basically you can use it like so:

@simplecontextmanager
def chdir(path):
    cwd = os.getcwd()
    os.chdir(path)
    yield
    os.chdir(cwd)

This looks pretty much like the first example, however it will have the semantics of the second example. Now clearly simplecontextmanager doesn't provide the whole range of flexibility that contextmanager does, but for simple code it is more straightforward.
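The actual source is in pyutil; just to sketch the idea, here is one way such a decorator could be written (my reconstruction, not necessarily the pyutil implementation):

import functools
from contextlib import contextmanager

def simplecontextmanager(func):
    # Wrap the generator so everything after its single yield runs in a
    # finally block, then reuse the standard contextmanager machinery.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        val = next(gen)        # run the setup code, up to the yield
        try:
            yield val
        finally:
            try:
                next(gen)      # run the cleanup code after the yield
            except StopIteration:
                pass
    return contextmanager(wrapper)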

My pyutil git repo has been updated to have working versions of the chdir, umask and update_env context managers, and also contains the source for the new simplecontextmanager.

Moral of the story, test your code and read the docs carefully!

with awesome:

Sat, 19 Jan 2013 19:38:43 +0000

Update: This article has some errors, which are corrected in a new article.

I'm only like 8 years late to the party on this one, but I've got to say Python's with statement is pretty awesome. Although I've used the functionality off and on, I'd mostly avoided writing my own context managers as I thought it required classes with magic __enter__ and __exit__ methods. And besides, my other main language is C, so, you know, I'm used to releasing resources when I need to.
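For contrast, this is roughly what I had been imagining (and avoiding): the class-based version of a directory-changing context manager (ChdirManager is just an illustrative name):

import os

class ChdirManager:
    def __init__(self, path):
        self.path = path

    def __enter__(self):
        # Save the current directory, then switch.
        self.cwd = os.getcwd()
        os.chdir(self.path)

    def __exit__(self, exc_type, exc_value, traceback):
        # Always switch back, even if the with-body raised.
        os.chdir(self.cwd)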

Anyway, I finally decided that I should actually learn how to make a context manager, and it turns out to be super easy thanks to Python's yield statement. So, here are some truly simple examples that will hopefully convince you to have a go at making your own context managers.

First up, a simple chdir context manager:

import os
from contextlib import contextmanager

@contextmanager
def chdir(path):
    cwd = os.getcwd()
    os.chdir(path)
    yield
    os.chdir(cwd)

If you are doing a lot of shell-style programming in Python this comes in handy. For example:

with chdir('/path/to/dosomething/in'):
    os.system('doit')

A lot simpler than manually having to save and restore directories in your script. Along the same lines a similar approach for umask:

@contextmanager
def umask(new_mask):
    cur_mask = os.umask(new_mask)
    yield
    os.umask(cur_mask)

Again, this avoids having to do lots of saving and restoring yourself. And, the nice thing is that you can easily use these together:

with umask(0o002), chdir('/'):
    os.system('/your/command')

Code for these examples is available on my GitHub page.

An argument against stdint.h

Fri, 18 Jan 2013 11:44:09 +0000

I've only been programming C for about 15 years, so for the vast majority of that time I've been using C99. And most of that time has been spent writing low-level code (operating systems, drivers, embedded systems). In these kinds of systems you care a lot about the exact sizes of types, since memory and cache are important, as is mapping variables to memory-mapped registers. For this reason, having a standard way of declaring variables of a certain bit size struck me as a pretty smart thing to do, so I've almost always used stdint.h on my projects, and in the rare cases where it wasn't available I used some close equivalent. However, there is an argument to be made against using these types (an extension of the general argument against typedefs). The core of the argument is that they hide information from the programmer that is vital in constructing correct programs.

Let's start with some simple C code:

unsigned int a;
unsigned int b;

The question is: what is the type of the expression a + b? In this case, things are fairly straightforward: the resulting expression has the type unsigned int.

Slightly trickier:

unsigned short a;
unsigned short b;

In this case, due to integer promotion, the operands are promoted to int and therefore the resulting expression is also of type int. Integer promotion is a little obscure, but relatively well known. Most of the time this isn't really a problem, because if you assign the expression back to a variable of type unsigned short the effect will be the same. Of course, if you assign it to an unsigned int, then you are going to be in trouble!

So nothing really new there. There is some obscurity, but with local knowledge (i.e.: just looking at that bit of code) you can tell with complete accuracy what the type of an expression is. The problem occurs when you have something like:

uint16_t a;
uint16_t b;

So, now what is the type of the expression a + b? The simple answer is: it depends. The only way to know the answer is if you know what the underlying type of uint16_t is. And this is a problem, because the correctness of your program depends on being able to answer that question correctly. The unfortunate result is that in an attempt to make code more portable across different platforms (i.e.: by not relying on the exact sizes of types), you end up in a situation where the correctness of your program still depends on the underlying platform, only now in a nasty, obscure manner. Win!

Of course, most of the time this doesn't matter, but when it does matter it is certainly a problem. Unfortunately, I don't have a good suggestion on how to avoid this problem, other than careful coding (and I really hate having to rely on careful coding just to get type safety correct). If you've got a good suggestion please let me know in the comments.

Of course, I think I will still continue to use stdint.h despite these problems; however, it is certainly something that C programmers should be aware of when using these types.