Un-shortening twitter URLs

Sun, 08 Nov 2009 17:29:55 +0000

Introduction

I got a little annoyed at twitter the other day for gratuitously tinyurl-ing a link, even when my message was under the magic 140 characters. As far as I can tell, there isn’t really any way to avoid this.

As it happens there has been quite a kerfuffle about this in the blog-o-sphere of late, with Josh Schachter (founder of del.icio.us) discussing some of the problems with URL shorteners. While I agree with most of what is written there, I think some of his points are a little over the top. Personally, I don’t think that URL shorteners are evil, but they can certainly be annoying. I regularly mouse-over links to see where they are going, and like the visual feedback of seeing visited links, both of which are broken by URL shorteners.

So, for my problem, there is a pretty easy solution: expand the URL somehow, and show that instead of the shortened URL. There are already services that solve this, but I was interested in seeing if it could be done purely within the browser, using JavaScript and standard (or proposed standard) DOM APIs.

Overall approach

Now, in general, these URL shortening services do the “right thing”, and provide a 301 permanent redirect response. So my basic thinking is something like:

  1. get the tweets from twitter
  2. find all the URLs in the tweets
  3. for each URL, do an HTTP HEAD request
  4. if the response code is 301, replace the link text (and href) with the response location.

... and the plan is to try and do this on the client. Now, if you’ve done something like this before, you can probably already guess all the pitfalls!
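In outline, steps 3 and 4 look something like the sketch below. This is purely an illustration (the real version has to run in the browser as JavaScript, and the function names here are made up):

```python
import http.client
from urllib.parse import urlsplit

def resolve(status, location, original):
    """Step 4: a 301 response means we can substitute the Location
    header for the shortened URL; anything else stays as-is."""
    if status == 301 and location:
        return location
    return original

def expand_url(url):
    """Step 3: issue an HTTP HEAD request for `url` (headers only,
    no body) and work out what the link text should become."""
    parts = urlsplit(url)
    conn = http.client.HTTPConnection(parts.netloc)
    conn.request("HEAD", parts.path or "/")
    response = conn.getresponse()
    return resolve(response.status, response.getheader("Location"), url)
```

The HEAD request is the key to doing this cheaply: we never fetch the body of the target page, only its headers.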

Render the tweet data

Twitter quite conveniently provides its data in a variety of useful machine-readable formats. The most useful for my purposes here is the Twitter XML format. Now, I’m leaving it as an exercise for the reader how to get an (authenticated) bit of Twitter XML into the client at run-time. For now, I’ve downloaded my latest timeline in XML format and made it available at http://benno.id.au/twit/twit.xml.

Now, in these days of JSON and JavaScript, no-one much seems to like the XSLT language, but I’m quite partial to it, and as far as I’m concerned it is the best way to take one XML document (the tweet timeline) and convert it into another XML document (the HTML output). (Technically, I’ll be converting it into an XML document fragment.)

I’ll leave creating an XSLT processor in JavaScript as an exercise for the reader, but the XSLT script itself is of some interest. At least I think so!

Now, for the most part, converting each Twitter status element into appropriate <div> and <p> tags is straightforward:

<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="html" />
<xsl:template match="statuses">
<h1>Tweet!</h1>
    <xsl:apply-templates/>
</xsl:template>

<xsl:template match="status">
<div class="tweet">
  <div class="text">
     <p><xsl:value-of select="text" /></p>
  </div>
  <div class="meta">
    <span><xsl:value-of select="created_at" /> from <xsl:value-of select="source"/></span>
  </div>
</div>
</xsl:template>
</xsl:stylesheet>

What this script basically does is find the <statuses> tag and output the heading tag. Then, for each <status> tag, it generates a div with the actual tweet. Now, this works pretty well, but the tweets are stored in plain text, without the URLs marked up in any way! E.g.:

<status>
  <created_at>Thu Apr 23 08:06:49 +0000 2009</created_at>
  <id>1592566033</id>
  <text>I can't believe the successor of the Radius protocol is the diameter protocol: http://tinyurl.com/c3zb2v</text>
  <source>web</source>
  ....
</status>

This means the script currently only prints out URLs with no link markup. Unfortunately, the XSLT language doesn’t have very powerful inbuilt string handling functions. Fortunately, the XSLT language is pretty general purpose (and Turing complete). Confusingly, functions are called templates, and the syntax to invoke a function is far from convenient, but it is relatively straightforward to parse a string to extract and mark up links.

So, we write a function, err, template, called parseurls. This template takes a single parameter called text. It then outputs this text with anything that looks like a URL replaced with <a href="URL">URL</a>. For this proof-of-concept, anything that starts with http:// counts as a URL, which works relatively well in practice.

The basic algorithm is to split the string into three parts: before-url, url, after-url. before-url is simply output as-is, url is output as described previously. If there is an after-url part, then we recursively call the template with this data.

<xsl:template name="parseurls">
  <xsl:param name="text"/>
  <xsl:choose>
    <xsl:when test="contains($text, 'http://')">
      <xsl:variable name="after_scheme" select="substring-after($text, 'http://')" />
      <xsl:value-of select="substring-before($text, 'http://')"/>
      <xsl:choose>
        <xsl:when test="contains($after_scheme, ' ')">
          <xsl:variable name="url" select="concat('http://', substring-before($after_scheme, ' '))" />
          <xsl:call-template name="linkify"><xsl:with-param name="url" select="$url"/></xsl:call-template>
          <xsl:text> </xsl:text>
          <xsl:call-template name="parseurls">
            <xsl:with-param name="text" select="substring-after($after_scheme, ' ')"/>
          </xsl:call-template>
        </xsl:when>
        <xsl:otherwise>
          <xsl:variable name="url" select="concat('http://', $after_scheme)"/>
          <xsl:call-template name="linkify"><xsl:with-param name="url" select="$url"/></xsl:call-template>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:when>
    <xsl:otherwise>
      <xsl:value-of select="$text"/>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
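If the XSLT is hard to follow, the same recursive algorithm can be sketched in Python (function names chosen to mirror the templates; this is just an illustration, not part of the actual pipeline):

```python
def linkify(url):
    """Mirror of the linkify template: wrap a URL in an anchor tag."""
    return '<a href="%s">%s</a>' % (url, url)

def parseurls(text):
    """Mirror of the parseurls template: split the string into
    before-url, url, after-url; recurse on the after-url part."""
    if 'http://' not in text:
        return text
    before, after_scheme = text.split('http://', 1)
    if ' ' in after_scheme:
        url, rest = after_scheme.split(' ', 1)
        return before + linkify('http://' + url) + ' ' + parseurls(rest)
    return before + linkify('http://' + after_scheme)
```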

So, through the power of functional programming, we can transform the raw tweet XML into a useful HTML document. Unfortunately, making HTTP requests is beyond the scope of XSLT, so for the next step we need to use JavaScript.

Replacing URLs

OK, the next step is to go and change the links in the document. The goal is to find any links that have been shortened and replace them with the true destination.

Finding the links in the document is relatively straightforward; something like var atags = document.getElementsByTagName("body")[0].getElementsByTagName("a"); does the trick. Now the aim is to go and see if any of these URLs return a redirect, and then update the element. What I want to do is something like:

var atags = document.getElementsByTagName("body")[0].getElementsByTagName("a");
for (var i = 0; i < atags.length; i++) {
    var x = atags[i];
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.update_x = x;
    xmlhttp.open("HEAD", x.attributes.getNamedItem("href").value);
    xmlhttp.onreadystatechange = function () {
       if (this.readyState == 4) {
           if (this.status == 301) {
               this.update_x.innerHTML = this.getResponseHeader('Location');
           }
       }
    };
    xmlhttp.send(null);
}

Basically, what this code does (or what I wish it did) is grab the href attribute out of each link, and do a HEAD request on it. Recall, a HEAD request will get the resource headers, but not the contents. Since we only care about the response code and the Location header, we do the network a favour, and don’t download all the unnecessary data.

Unfortunately, at this stage, we run pretty hard against some fundamental limitations of XMLHttpRequest, specifically the same-origin policy. In short (and simplified) terms, the same-origin policy means you can’t make HTTP requests except to the domain where the page resides. This is done for a very good security reason, but is slightly frustrating at this point.

Now, I wasn’t going to let a simple thing like this stop me, so I implemented a pretty simple server-side proxy which let me avoid this problem. (This is a pretty unsatisfactory solution, and I’m definitely looking forward to some of the new cross-origin extensions coming out.) So, basically, we change the open call to something like:

xmlhttp.open("HEAD", "proxy?" + x.attributes.getNamedItem("href").value);

OK, so the server-side proxy gets around our cross-origin restriction; unfortunately we now hit a new problem! It turns out that XMLHttpRequest is specified so that any redirects are automatically followed by the implementation, which means we are stuck again, because we don’t even get a chance to find out about redirections! To get around this I hacked up my proxy so that it converted 301 response codes into a 531 response code. (No particular reason for choosing that number over any other available response code.)
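I haven’t included my actual proxy here, but a minimal sketch of the idea (hypothetical, built on Python’s standard http.server and http.client modules) might look like this; the interesting part is the 301-to-531 translation:

```python
import http.client
import http.server
from urllib.parse import urlsplit

def map_status(code):
    """Rewrite 301 to the made-up 531 so the browser's XMLHttpRequest
    doesn't transparently follow the redirect."""
    return 531 if code == 301 else code

class ProxyHandler(http.server.BaseHTTPRequestHandler):
    def do_HEAD(self):
        # The client asks for e.g. "proxy?http://tinyurl.com/abc";
        # everything after the first '?' is the URL to check.
        target = self.path.split('?', 1)[1]
        parts = urlsplit(target)
        conn = http.client.HTTPConnection(parts.netloc)
        conn.request("HEAD", parts.path or "/")
        response = conn.getresponse()
        self.send_response(map_status(response.status))
        location = response.getheader("Location")
        if location:
            self.send_header("Location", location)
        self.end_headers()
```

Served from the same origin as the page, this lets the JavaScript see the redirect status and the Location header directly.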

Putting all this together gives us a solution for doing URL elongating (mostly) on the client-side. I’ve put an example up at http://benno.id.au/twit/.

Conclusions and further work

As you can see from the example, modifying the links after the page has already rendered can be somewhat distracting. An alternative user interface would be to hook the mouse-over event, and in those cases display the real URL in the status bar.

Obviously this has the potential to create a large number of network requests, but this would be no different to a desktop application or browser plugin, so there is not really much that can be done there. It might be possible to batch a number of requests to the proxy, and also have the proxy cache responses, but I’d prefer to find a solution that avoids needing the proxy in the first place!

The W3C Cross-Origin Resource Sharing working draft provides a mechanism which allows holes to be punched in the same-origin restriction. If URL shortening services allowed their resources to be shared cross-origin, by implementing the Access-Control-Allow-Origin HTTP header, the need for the proxying mechanism would go away.

Finally, I would propose that the XMLHttpRequest API be updated to provide a mechanism to avoid following redirects. The W3C working draft notes that a “property to disable following redirects” is being considered for a future version of the specification. I would be in favour of this.

Unfortunately the conclusion needs to be that, with the current API limitations and security models, it is not possible to write this kind of application in a pure client-side manner with current web technologies.

Updated blog

Tue, 28 Apr 2009 14:13:43 +0000
meta

I’ve updated my blog software so that you can now browse older blog entries, and at the same time I’ve added tags, so you can now browse all my entries by tag. If you are only interested in Android (for example), you can browse all my Android articles.

Currently the RSS feed still has everything, if anyone wants my RSS feed to support the same thing, please let me know, and I’ll add that too.

pexif 0.13 release

Thu, 23 Apr 2009 11:19:06 +0000
tech code pexif python

pexif is a Python library for editing an image’s EXIF data. Somewhat embarrassingly, the last release I made (0.12) had a really stupid bug in it. This has now been rectified, and a new version (0.13) is now available.

Super OKL4

Sat, 21 Feb 2009 11:39:48 +0000
tech okl4

Cool! Someone in Japan seems to be porting OKL4 to the SuperH architecture.

OLPC Library

Wed, 21 Jan 2009 07:57:40 +0000
tech olpc

A couple of months ago I posted about my OLPC XO laptop, and how I was looking to give it away to someone who can make good use of it.

As it happens, I’m not the only one in this position, and John came up with the idea of creating an OLPC library that would enable developers to loan out a laptop. So I will be donating my laptop to the new OLPC library.

linux.conf.au 2009

Sun, 18 Jan 2009 08:32:02 +0000
tech lca

I’m currently sitting in Melbourne airport waiting for my plane to Hobart. I’m heading off for the best conference in the world, where I’ll be running the Open Mobile Miniconf. There is an awesome set of talks (which I’ll be blogging about shortly).

urllib2 and web applications

Mon, 05 Jan 2009 18:17:52 +0000
python tech article web

For a little personal project I’ve been working on recently, I needed to create some unit tests for a CGI script, sorry, that should be, a cloud computing application. Of course, I turned to my trusty hammer, Python, and was a little disappointed to discover it didn’t quite have the batteries I needed included.

I kind of thought that urllib2 would more or less do what I wanted out of the box, but unfortunately it didn’t and (shock, horror!) I actually needed to write some code myself! The first problem I ran into is that urllib2 only supports GET and POST out of the box. HTTP is constrained enough in the verbs it provides, so I really do want access to things like PUT, DELETE and HEAD.

The other problem I ran into is that I didn’t want things automatically redirecting (although clearly this would be the normal use-case), because I wanted to check that I got a redirect in certain cases.

The final problem I had is that only status code 200 is treated as success, and other 2xx codes raise exceptions. This is generally not what you want, since 201 is a perfectly valid return code, indicating that a new resource was created.

So, urllib2 is billed as “an extensible library for opening URLs using a variety of protocols”; surely I can just extend it to do what I want? Well, it turns out that I can, but it was harder than I expected. Not because the final code I needed to write was difficult or involved, but because it was quite difficult to work out what the right code to write was. I want to explore for a little bit why (I think) this might be the case.

urllib2 is quite nice, because simple things (fetch a URL, follow redirects, POST data) are very easy to do:

ret = urllib2.urlopen("http://some_url/")
data = ret.read()

And it is definitely possible to do more complex things, but (at least for me) there is a sharp discontinuity in the API, which means that learning the easy API doesn’t help you learn the more complex API. The documentation (at least as far as I read it) also doesn’t make it apparent that there are, in effect, two modes of operation.

The completely frustrating thing is that the documentation in the source file is much better than the online documentation, since it talks about some of the things that happen in the module which are otherwise “magic”.

For example, the build_opener function is pretty magical, since it does a bunch of introspection, and ends up either adding a handler or replacing a handler depending on the class. This is explained in the code as: “if one of the argument is a subclass of the default handler, the argument will be installed instead of the default”, which to me makes a lot of sense, whereas the online documentation describes it as: “Instances of the following classes will be in front of the handlers, unless the handlers contain them, instances of them or subclasses of them: <list of default handlers>”. For me the former is much clearer than the latter!
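To see the replacement behaviour in action, here is a small demonstration (written against Python 3’s urllib.request, where urllib2’s machinery now lives; the same pattern applies to the 2.x urllib2 names):

```python
import urllib.request

class MyHTTPHandler(urllib.request.HTTPHandler):
    """Adds nothing new; but because it subclasses HTTPHandler,
    build_opener installs it *instead of* the default handler."""
    pass

opener = urllib.request.build_opener(MyHTTPHandler)
installed = [type(h).__name__ for h in opener.handlers]
# 'MyHTTPHandler' appears in the list; the default 'HTTPHandler' does not.
```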

Anyway, here is the code I ended up with:

import urllib2

opener = urllib2.OpenerDirector()
opener.add_handler(urllib2.HTTPHandler())

def restopen(url, method=None, headers=None, data=None):
    if headers is None:
        headers = {}
    if method is None:
        if data is None:
            method = "GET"
        else:
            method = "POST"
    return opener.open(HttpRestRequest(url, method=method, 
                                       headers=headers, data=data))
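The HttpRestRequest class used above isn’t shown in the post; a minimal sketch of what it presumably does (written here against Python 3’s urllib.request, where urllib2’s classes now live) is:

```python
import urllib.request

class HttpRestRequest(urllib.request.Request):
    """A Request whose HTTP verb can be set explicitly, rather than
    being inferred as GET-without-data / POST-with-data."""
    def __init__(self, url, method=None, headers=None, data=None):
        urllib.request.Request.__init__(self, url, data=data,
                                        headers=headers or {})
        self._method = method

    def get_method(self):
        # Fall back to the default GET/POST inference if no verb given.
        return self._method or urllib.request.Request.get_method(self)
```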

So, conclusions: if the docs don’t make sense, read the code. If you are writing an API, try to make it easy to get from the easy case to the more complex case. (This is quite difficult to do, and I have definitely been guilty in the past of falling into the same trap in APIs I have provided!) If you can’t get the API to easily transition from easy to hard, make sure you document it well. Finally, Python is a great language for accessing services over HTTP, even if it does require a little bit of work to get the framework in place.

IMAP passwords in Android

Sun, 04 Jan 2009 08:47:54 +0000
tech android article

I’ve been setting up my new Android phone and found out that the password handling code in the IMAP client doesn’t work so well with my Dovecot IMAP server.

Now, I really can’t be bothered working out which side is doing it wrong, but Dovecot expects your password to be contained in double-quotes if it contains special characters. (No, I don’t precisely know what those characters are!) And, of course, if you are double-quoting the string, you then need to escape any double-quotes in the password itself. Now, like I said, I have no idea if this is the correct (according to the standard) behaviour, but it is the behaviour that I have to deal with, since I can’t mess with the server. Of course, the Android IMAP client doesn’t do this escaping, and so you get an error message indicating your username / password is incorrect. Frustratingly, the error message doesn’t pass on the information from the server giving detail on exactly what the problem is, so you are left guessing.

Anyway, it turns out if you manually escape the password that you give to the Android email client things work fine. Of course, the SMTP server doesn’t need this escaping, and fails if you do have it, so you need to type in a different, unescaped, password for SMTP. Fun and games!
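For what it’s worth, Dovecot’s behaviour matches IMAP’s quoted-string syntax from RFC 3501: backslash and double-quote are the two special characters, and each is escaped with a backslash inside a double-quoted string. The escaping the client should be doing looks roughly like:

```python
def imap_quote(password):
    """Render a password as an IMAP quoted string (RFC 3501):
    escape backslash and double-quote, then wrap in double quotes."""
    return '"%s"' % password.replace('\\', '\\\\').replace('"', '\\"')
```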

Looking at the latest source code for Android, it looks like this has been properly fixed, so hopefully an upgrade to the Cupcake branch in the near future will solve the problem.

Debugging embedded code Using GDB and the Skyeye simulator

Fri, 02 Jan 2009 08:55:48 +0000
tech article gdb

This post is basically a crash course in using gdb to run and debug low-level code. Learning to use a debugger effectively can be much more powerful than ghetto printf() and putc() debugging. I should point out that I am far from a power gdb user, and am usually much more comfortable with printf() and putc(), so this is very much a beginner’s guide, written by a newbie. With those caveats in mind, let’s get started.

So the first thing to do is to get our target up and running. For this, our target will be a virtual device running with Skyeye. When you start up Skyeye, pass it the -d flag, e.g.: $ skyeye -c config.cfg -d. This will halt the virtual processor and provide an opportunity to attach the debugger. The debugger connection will be available on a TCP socket, defaulting to port 12345. Of course, a decent JTAG adapter should be able to give you the same type of thing with real hardware.

Now, you run GDB: $ arm-elf-gdb. Once gdb is running you need to attach to the target. To do this we use: (gdb) target remote :12345. Now you can start the code running with (gdb) continue.

Now, just running the code isn’t very useful; you can do that already. If you are debugging, you probably want to step through the code. You do this with the step command. You can step through code a line at a time, or an instruction at a time. At the earliest stages you probably want to use the si command to step an instruction at a time.

To see what your code is doing you probably want to be able to display information. For low-level start-up code, being able to inspect the register and memory state is important. You can look at the registers using the info registers command, which prints out all the general-purpose registers as well as the program counter and status registers.

For examining memory, the x command is invaluable. The examine command takes a memory address as an argument (actually, it can be a general expression that evaluates to a memory address). The command has some optional arguments. You can choose the number of units to display, the format to display memory in (hex (x), decimal (d), binary (t), character (c), instruction (i), string (s), etc.), and also the unit size (byte (b), halfword (h), word (w)). So, for example, to display the first five words in memory as hex we can do: (gdb) x /5x 0x0. If we want to see the values of individual bytes as decimal we could do: (gdb) x /20bd 0x0. Another common example is to display the next 5 instructions, which can be done with (gdb) x /5i $pc. The $pc expression returns the value in the pc register.

Poking at bits and bytes and stepping an instruction at a time is great for low-level code, but gdb can end up being a lot more useful if it knows a little bit more about the source code you are debugging. If you have compiled the source code with the -g option, your ELF file should have the debugging information you need embedded in it. You can let gdb know about this file by using (gdb) symbol-file program.elf. Now that you actually have symbols and debugging information, you can use the normal step command, and it will step through lines of source code (rather than instructions).

The other nice thing is that you can easily set breakpoints and watchpoints. (You don’t have to have source debugging enabled for this, but it makes things a lot easier!) Setting a breakpoint is easy; you can set it on a line, e.g.: (gdb) break file.c:37, or on a particular function, e.g.: (gdb) break schedule. Breakpoints are neat, but watchpoints are even cooler, since you can test for specific conditions, e.g.: (gdb) watch mask < 2000.

Now that you have these nice watchpoints and breakpoints, you will probably find that most of the time you just end up printing out some variables each time you hit the point. To avoid this repetitive typing you can use the display command. Each expression you install with the display command will be printed each time program execution stops (e.g.: you hit a breakpoint or watchpoint).

So, this is of course just scratching the surface. One final thing that will likely make your time using gdb more useful and less painful (i.e.: less repetitive typing) is the define command, which lets you create simple little command scripts. Related to this, when you start gdb you can pass a command script with the -x flag. So, instead of littering your code with printf() statements everywhere, you might want to write some gdb commands that enable a breakpoint and display the relevant data.
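For example, a command file passed with -x might look like the following (the symbol names here are hypothetical):

# debug.gdb: run with arm-elf-gdb -x debug.gdb
target remote :12345
symbol-file program.elf
break schedule
# printed automatically every time execution stops
display/i $pc
display/x $sp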

Good luck, and happy debugging!

The trouble with literal pools

Fri, 02 Jan 2009 06:43:37 +0000
article tech arm

Yesterday we saw how, with some careful programming and the right compiler flags, we could get GCC to generate optimally small code for our particular function. In this post we take a look at one of the ways in which GCC fails to produce optimal code. Now, there are actually many ways, but the one I want to concentrate on in this post is the use of literal pools.

So, what is a literal pool? I’m glad you asked. The literal pool is an area of memory (in the text segment) which is used to store constants. These constants could be plain numerical constants, but their main use is to store the addresses of variables in the system. These addresses are needed because the ARM instruction set does not have any instructions for directly loading (or storing) to an arbitrary address in memory. Instead, ldr and str can only access memory at a small immediate offset (12 bits) from a base register. Now, there are lots of ways you could generate code with this restriction; for example, you could ensure your data section is less than 8KiB in size, and reserve a register to be used as a base for all data lookups. But such an approach only works if you have a limited data section size. The standard approach is that when a variable is used, its address is written out into a literal pool. The compiler then generates two instructions: the first reads the address from this literal pool, and the second is the instruction that accesses the variable.

So, how exactly does this literal pool work? Well, so that a special register is not needed to point at the literal pool, the compiler uses the program counter (PC) register as the base register. The generated code looks something like: ldr r3, [pc, #28]. That code loads the value at a 28-byte offset from the current value of the PC into register r3. r3 then contains the address of the variable we want to access, and can be used like: ldr r1, [r3, #0], which loads the value of the variable (rather than the address) into r1. Now, as the PC is used as the base for the literal pool access, it should be clear that the literal pool must be stored close enough to the code that uses it.

To ensure that the literal pool is close enough to the code using it, the compiler stores a literal pool at the end of each function. This approach works pretty well (unless you have a 4KiB+ function, which would be silly anyway), but can be a bit of a waste.

To illustrate the problem, consider this (contrived) example code:

static unsigned int value;

unsigned int
get_value(void)
{
    return value;
}

void
set_value(unsigned int x)
{
    value = x;
}

Now, while this example is contrived, the pattern involved exhibits itself in a lot of real-world code. You have some private data in a compilation unit (value), and then you have a set of accessor (get_value) and mutator (set_value) functions that operate on the private data. Usually the functions would be more complex than in our example, and usually there would be more than two. So let’s have a look at the generated code:

00000000 <get_value>:
get_value():
   0:	4b01      	ldr	r3, [pc, #4]	(8 <get_value+0x8>)
   2:	6818      	ldr	r0, [r3, #0]
   4:	4770      	bx	lr
   6:	46c0      	nop			(mov r8, r8)
   8:	00000000 	.word	0x00000000
			8: R_ARM_ABS32	.bss

0000000c <set_value>:
set_value():
   c:	4b01      	ldr	r3, [pc, #4]	(14 <set_value+0x8>)
   e:	6018      	str	r0, [r3, #0]
  10:	4770      	bx	lr
  12:	46c0      	nop			(mov r8, r8)
  14:	00000000 	.word	0x00000000
			14: R_ARM_ABS32	.bss

You can see that each function has a literal pool (at addresses 0x8 and 0x14). You can also see that there is a relocation associated with each of these addresses (R_ARM_ABS32 .bss). This relocation means that at link time the address of value will be stored at locations 0x8 and 0x14. So, what is the big deal here? Well, there are two problems. First, we have two literal pools containing duplicate data; by storing the address of value twice, we are wasting 4 bytes (remember from yesterday, we have a very tight memory budget and we care where every byte goes). The second problem is that we need to insert a nop in the code (at addresses 0x6 and 0x12), because the literal pool must be aligned.

So, how could the compiler be smarter? Well, if instead of generating a literal pool for each individual function it did it for the whole compilation unit, then instead of having lots of little literal pools with duplicated data throughout, we would have a single literal pool for the whole file. As a bonus, you would only need alignment once as well! Obviously, if the compilation unit ends up being larger than 4KiB then you have a problem, but in this case you could still delay emitting the literal pool until after 4KiB worth of code. As it turns out, the commercial compiler from ARM, RVCT, does exactly this. So let’s have a look at the code it generates:

00000000 <get_value>:
get_value():
   0:	4802      	ldr	r0, [pc, #8]	(c <set_value+0x6>)
   2:	6800      	ldr	r0, [r0, #0]
   4:	4770      	bx	lr

00000006 <set_value>:
set_value():
   6:	4901      	ldr	r1, [pc, #4]	(c <set_value+0x6>)
   8:	6008      	str	r0, [r1, #0]
   a:	4770      	bx	lr
   c:	00000000 	.word	0x00000000
			c: R_ARM_ABS32	.data$0

You see that the code is more or less the same, but there is just one literal pool right at the end of the file, and no extra nops are needed for alignment. Without merging literal pools we have a .text size of 24 bytes, with the merging we slash that down to 16 bytes.

So merging literal pools is pretty good, but the frustrating thing is that in this example, we don’t even need the literal pool! If we examine the final compiled image for this program:

Disassembly of section ER_RO:

00008000 <get_value>:
get_value():
    8000:	4802      	ldr	r0, [pc, #8]	(800c <set_value+0x6>)
    8002:	6800      	ldr	r0, [r0, #0]
    8004:	4770      	bx	lr

00008006 <set_value>:
set_value():
    8006:	4901      	ldr	r1, [pc, #4]	(800c <set_value+0x6>)
    8008:	6008      	str	r0, [r1, #0]
    800a:	4770      	bx	lr
    800c:	00008010 	.word	0x00008010
Disassembly of section ER_RW:

00008010 <value>:
    8010:	00000000 	.word	0x00000000

You should notice that the actual location of the value variable is 0x8010. At address 0x800c we have the literal pool storing the address of a variable which is in the very next word! If we optimised this by hand, we would end up with something like (need to verify the offset):

Disassembly of section ER_RO:

00008000 <get_value>:
get_value():
    8000:	4802      	ldr	r0, [pc, #4]	(8008 <set_value+0x8>)
    8002:	4770      	bx	lr

00008004 <set_value>:
set_value():
    8004:	6008      	str	r0, [pc, #0]
    8006:	4770      	bx	lr
Disassembly of section ER_RW:

00008008 <value>:
    8008:	00000000 	.word	0x00000000

If we get rid of the literal pool entirely, we save the memory of the literal pool itself (4 bytes), plus the two instructions needed to load values out of the literal pool (4 bytes). This cuts our text size down to a total of only 8 bytes! This is a factor of 3 improvement over the GCC-generated code. Granted, you are not always going to be able to perform this type of optimisation, but when you care about size, it is quite important. It would be nice if GCC supported a small data section concept, so that you could specify variables that essentially resided within the literal pool instead of needing an expensive (in terms of space and time) indirection.

For this project, it looks like the code will have to be hand-tweaked assembler, which is frustrating, because when you use a person as your compiler, iterations become really expensive, and you want to make sure you get your design right up front.