You would think after a few hundred flights and around 300,000 miles the wonder of flying would have worn off. And to a very large extent it has. There is nothing magical or exciting about being stuck in a cramped, narrow seat for 12 hours, but there are definitely times when you can't help but be amazed at where technology and industrialisation have got us.
Taking off for the first time on the massive double-decker super jumbo, the A380, is definitely one of those experiences. Despite the solid engineering and science behind it, it is still pretty amazing when something that big actually gets off the ground. The fact that this aircraft is so quiet when operating just adds to the experience.
I was lucky enough to get a window seat on the upper deck on my flight from Sydney to Singapore last week. The seat was comfortable, there is storage right next to you which is great, and the entertainment system is freaking cool. Nice, large, crisp LCD screens, and a huge range of TV shows (I watched Buffy, and Bones), movies (I finally saw Juno), and multiplayer games (I cleaned up on Texas Hold-em). All in all, Singapore still gets my vote for best airline.
The next 10 flights (Singapore-Frankfurt, Frankfurt-Marseille, Marseille-Munich, Munich-Berlin, Berlin-Copenhagen, Copenhagen-Helsinki, Helsinki-Frankfurt, Frankfurt-Zürich, Zürich-Washington D.C., Washington D.C.-San Francisco) were nothing to write home about. I didn't get any upgrades, I came very close to missing connections, I ran out of battery on my laptop: all the usual things that make flying fun. I really must recommend not flying through Dulles. It took around 90 minutes to get through immigration, customs, baggage recheck, and security. It looked as though they were upgrading the airport, but if you are flying Europe to the west coast of the US I'd recommend anywhere else, except maybe Denver, where you are liable to get snowed in, or Chicago, where you are likely to miss your connection. In fact, just try and fly direct.
Thankfully, after the 8 hours to the east coast plus 6 more hours to the west coast, I was able to look forward to flying business class home to Sydney. I'm not sure if it was the contrast with 14 hours of flying in economy, but this was one of my most relaxed flights ever. For some reason the flight was basically empty: the business class cabin was only half full, and I think anyone in economy probably got a row to themselves.
But none of that would usually inspire me to bother writing. What really did it was the view from the airplane at dawn. Seeing the sun rise over the horizon when you are flying 10km above the planet is pretty amazing when you think about it.
Trying to capture the view is not easy. Shooting out the plane window is not exactly ideal, and I just don't think my point-and-shoot is up to it (blame the tools). Anyway, this photo is the best of the lot. It kind of works, but in real life the blues are bluer, the sun a deeper orange, and the view far more expansive.
I'm in San Jose at the moment for both the CELF Embedded Linux Conference and the Embedded Systems Conference (ESC). (Which are conveniently scheduled at the same time, in different places!) I'm not quite sure how much of each I'll see. I'm primarily going to CELF, but will probably end up spending some time as booth babe at the Open Kernel Labs stand at ESC.
Most importantly, there will be beer at Gordon Biersch (San Jose) on Tuesday night from around 7pm. (Not Thursday night as I may have told people previously; of course, if anyone wants to meet up on Thursday as well, that works too.)
I did manage to take a quick break from work yesterday and took advantage of the awesome weather in northern California to drive down to Big Sur along Highway 1. Some pretty spectacular scenery. Hopefully next time I won't have a sprained ankle and will be able to do some hiking.
ESC seems to bring out some fun billboards, such as this one that I saw just outside my hotel today.
I released a new version of pexif today. This release fixes some small bugs and now deals with files containing multiple application markers. This means files that have XMP metadata now work.
Now I just wish I had time to actually use it for its original purpose of attaching geo data to my photos.
So, I started with something reasonably straight-forward — update my blog posts so that the <title> tag is set correctly — which quickly led me down the rabbit hole of typographically correct apostrophes, Unicode, XML, encodings, keyboards and input methods. Updating my blog software took about 15 minutes, delving down the rabbit hole took about 5 hours.
So, the apostrophe. This isn’t about the correct usage of the apostrophe. This is entirely about correctly typesetting the apostrophe. Now there are lots of opinions on the subject. It basically comes down to the choice between ASCII character 0x27 and Unicode code point U+2019. Of course it just so happens that ASCII character 0x27 is also Unicode code point U+0027, so really, this comes down to a discussion about which Unicode code point is most appropriate for representing the apostrophe. After way too much searching, it actually turns out to be a really simple decision. Unicode provides the documentation for the code points in a series of charts. The chart C0 Controls and Basic Latin (pdf) documents the APOSTROPHE. It is described as:
0027 ' APOSTROPHE
  = apostrophe-quote (1.0)
  = APL quote
  • neutral (vertical) glyph with mixed usage
  • 2019 ’ is preferred for apostrophe
So, despite the fact that it is named APOSTROPHE, it is described as a neutral (vertical) glyph with mixed usage, and it notes that U+2019 is the preferred code point for apostrophe. This looks pretty conclusive, but let’s check the General Punctuation chart:
2019 ’ RIGHT SINGLE QUOTATION MARK
  = single comma quotation mark
  • this is the preferred character to use for apostrophe
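The chart names are easy to verify from code. A small sketch using Python’s standard unicodedata module, which exposes the official character names:

```python
import unicodedata

# Ask Unicode what it calls each candidate code point.
for ch in ("\u0027", "\u2019"):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
# U+0027 APOSTROPHE
# U+2019 RIGHT SINGLE QUOTATION MARK
```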
So, my conclusion is that the most appropriate character for an apostrophe is U+2019. OK, great, now I have to decide how I can actually encode this. I’m used to writing plain ASCII text documents, and U+2019 is not something I can represent in ASCII. So, since I’m mostly concerned about documents I’m publishing on the interwebs, I figured that character entity references would be the way to go. And there appears to be a relevant entity:
<!ENTITY rsquo CDATA "&#8217;" -- right single quotation mark, U+2019 ISOnum -->
Of course it seems a little odd using &rsquo; to represent an apostrophe, but so be it. Now, XML defines a new character entity, &apos;, which you might on first glance think is exactly what you want, but on second glance it isn’t, since it maps to U+0027, not U+2019. &apos; is mostly used for escaping strings which are enclosed in actual ' characters. So, &apos; is out. XML itself only defines character entities for ampersand, less-than, greater-than, quotation mark, and apostrophe. XHTML however defines the rest of the character entities that you have come to love and expect from HTML, so &rsquo; is still in, as long as it is used in an XHTML document, not a general XML document.
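As a sanity check, Python’s html module (which implements the full HTML5 entity table, a superset of XHTML’s) shows that the two entities really do decode to different code points:

```python
import html

# &rsquo; comes from the (X)HTML entity set, &apos; from XML;
# they map to different code points.
print(html.unescape("It&rsquo;s"))  # It’s  (contains U+2019)
print(html.unescape("It&apos;s"))   # It's  (contains U+0027)
```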
So I was set on just using &rsquo;, and I sent my page off to the validator. This went fine, except it pedantically pointed out that I had not defined a character encoding, and really I should. Damn, now I need to think about character encodings too. OK, so what options are there? Well, IANA has a nice list of official names for character sets that may be used on the internet.
ANSI_X3.4-1968 (a.k.a. US-ASCII, a.k.a. ASCII) had to be a big first contender, since that is basically what I had been doing for many a year, but to be honest, this seemed a little backwards. The idea of having to use numeric character references (NCRs) every time I wanted an apostrophe seemed a little silly. Besides, the W3C recommends using
an encoding that allows you to represent the characters in their normal form, rather than using character entities or NCRs
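For completeness, an NCR spells out the code point number directly, so it works even in a pure-ASCII document; a quick sketch with Python’s html module illustrates both the hex and decimal forms:

```python
import html

# &#x2019; (hex) and &#8217; (decimal) both name U+2019 directly,
# no entity table required.
assert html.unescape("&#x2019;") == "\u2019"
assert html.unescape("&#8217;") == "\u2019"
print("Both NCR forms decode to", html.unescape("&#x2019;"))
```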
OK, so since the XML spec defines that:
A character reference refers to a specific character in the ISO/IEC 10646 character set
it seems that I really should choose an encoding that can directly encode Unicode code points. (The Unicode standard and ISO/IEC 10646 track each other.) So, what options are there for encoding Unicode?
Well, it seems that one of the Unicode transformation formats would be a good choice. But there are so many to choose from: UTF-8, UTF-16, UTF-32, even UTF-9. While UTF-9 was definitely a contender, UTF-8 seems the sanest thing for me. For a start it seems to just-work™ in my editor. So, going with UTF-8, I still end up needing to let other people know my files are encoded in UTF-8. There appear to be a few options for doing this, and the article goes into a long explanation of the various pros and cons. In the end, I just put it into the XML prolog.
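To see what this costs concretely: U+2019 encodes to three bytes in UTF-8, and declaring the encoding in the XML prolog is a one-liner. A small Python sketch (the file name is just an example):

```python
# U+2019 encodes to the three bytes E2 80 99 in UTF-8.
assert "\u2019".encode("utf-8") == b"\xe2\x80\x99"

# A document with an XML prolog that declares the encoding,
# containing a literal U+2019 rather than an entity or NCR.
doc = '<?xml version="1.0" encoding="UTF-8"?>\n<p>it\u2019s</p>\n'
with open("apostrophe-example.xml", "w", encoding="utf-8") as f:
    f.write(doc)
```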
Of course the final piece of the puzzle is actually inputting the characters. OS X seems to have fairly good support for this. If you poke around a bit in the internationalisation support in System Preferences and enable Show Keyboard Viewer, Show Character Viewer and Unicode Hex Input, you should be able to work things out.
So, I can now have lovely typographically correct apostrophes and they work great, and all is good with the world. (Except of course that this page probably renders like crap in Internet Explorer. Oh well.)
The backup files that emacs litters your filesystem with can be a real pain. Stupid tilde files can be annoying and dangerous, especially since ~ does double duty as a shortcut for your home directory. (I can't be the only person who has accidentally typed rm -fr *~ as rm -fr * ~.)
Anyway, the easy solution is to add this to your config file:
(setq backup-directory-alist '(("" . "~/.emacs.d/emacs-backup")))
In a recent post Gernot made a comparison between nanokernels and hardware abstraction layers (HALs). This prompted a question on the OKL4 developers mailing list: well, couldn’t you consider a microkernel a HAL?
I think the logical conclusion, both theoretical and practical, is a resounding, no.
Why? Well, a microkernel is, in theory (if not always in practice), minimal. That is, the only things it should include are those pieces of code that must run in privileged mode.
So, if a microkernel were to provide any hardware abstractions, it would only be providing the abstractions that have to be in the kernel, which really falls short of a complete hardware abstraction layer.
Now, probably the more interesting questions are: should the microkernel provide any hardware abstraction, and if so, what hardware should it be abstracting, and what is the right abstraction? After starting to write some answers to these questions I reminded myself of the complexity involved in answering them, so I will leave these questions hanging for another post.