Super OKL4

Sat, 21 Feb 2009 11:39:48 +0000
tech okl4

Cool! Someone in Japan seems to be porting OKL4 to the SuperH architecture.

What makes code trustworthy?

Mon, 08 Sep 2008 09:22:52 +0000
tech okl4 trust article

So last week I posted on the difference between trusted and trustworthy code. Picking up that thread again: if there is some code in my system that is trusted, how can I tell if it is actually trustworthy?

Now, ruling out truly malicious code, our assessment of the trustworthiness of code really comes down to an assessment of the quality of the code, and a pretty reasonable proxy for code quality is the number of errors in the code, or more specifically, the lack thereof. So the question is: how can we determine whether a piece of code has a small number of bugs?

The number of errors in a body of code is the product of two important factors: the defect density and the size of the code. Defect density is generally measured in bugs per thousand lines of code (KLOC), and the size of the code is measured in lines of code. Now, there are plenty of arguments about what the “right” method is for measuring lines of code, and in general you can only know the exact defect density after all the bugs are found. And of course, as Dijkstra observed, program testing can be used to show the presence of bugs, but never to show their absence! So, to lower the number of bugs that exist in a code base there are really two options: reduce the number of lines of code, or improve the defect density.
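To make the two levers concrete, here is a minimal sketch of the bugs = density × size relationship. The densities and sizes below are purely illustrative numbers, not measurements from any real project:

```python
def expected_bugs(defect_density_per_kloc, kloc):
    """Estimated residual bugs = defect density (bugs/KLOC) x size (KLOC)."""
    return defect_density_per_kloc * kloc

# A large code base at an illustrative density of 5 bugs/KLOC...
print(expected_bugs(5.0, 1000))   # 5 bugs/KLOC x 1,000 KLOC = 5000.0 bugs
# ...versus a small, high-quality code base.
print(expected_bugs(0.5, 10))     # 0.5 bugs/KLOC x 10 KLOC = 5.0 bugs
```

The point of the toy calculation is that you can attack either factor, and the rest of this post looks at both in turn.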

So, how do we improve, that is reduce, the defect density? Well, there are a number of pretty well known ways. Effective testing, despite its caveats, goes a long way towards reducing the number of bugs in a code base (assuming that you actually fix the bugs you find!). Static analysis, in its various forms, is also an important tool for increasing the quality of code, and is often a great complement to testing, as it can expose bugs in code that is impractical to test. And of course there are other formal methods, like model checking, which can help eliminate bugs from the design phase; a great example of this is the SPIN model checker. Code reviews, while time intensive, are also a great way of finding bugs that would otherwise fly under the radar. Another way to improve code quality is to write simple, rather than complex, code. McCabe’s cyclomatic complexity measure can be one good indicator of this. This is, of course, just a sampling of some of the aspects of software quality; see Wikipedia and McConnell’s Code Complete for more information.
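As a rough illustration of the complexity measure: cyclomatic complexity for a single function can be approximated as one plus the number of decision points. Here is a sketch using Python’s ast module; note this is a simplification of McCabe’s full definition, which counts edges and nodes in the control-flow graph:

```python
import ast

# Node types that introduce a decision point. This is an approximation of
# McCabe's measure (properly: edges - nodes + 2 on the control-flow graph).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
branchy = ("def g(x):\n"
           "    if x > 0:\n"
           "        return 1\n"
           "    elif x < 0:\n"
           "        return -1\n"
           "    return 0\n")

print(cyclomatic_complexity(simple))   # 1: straight-line code, one path
print(cyclomatic_complexity(branchy))  # 3: two branches add two paths
```

The higher the number, the more independent paths there are to test and review, which is exactly why simple code is easier to get right.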

Now, how do you know if some code base has actually undergone testing? How do you know if static analysis has been performed on the source code? How do you know if the code has undergone a thorough code review? Well, generally there are two ways: you trust the developer of that code, or you get someone you do trust to do the quality analysis (which may be yourself). This is the point where things quickly shift from technological solutions into the fuzzy world of economics, social science, psychology and legal theory, as we try to determine the trustworthiness of another entity, be it person or corporation. The one technologically relevant part is that it is much more difficult to do a 3rd party analysis of code quality without access to the source code. Note: I am not saying that open source software is more trustworthy, simply that making the source available enables 3rd party assessments of code quality, which may make it easier for some people to trust the code.

So, improving code quality, and thereby reducing defect density, is one side of the equation, but even if you have a very low defect density, for example less than 1/KLOC, you can still have a large number of bugs if your code base is large. So it is very important to reduce the size of the code base as well. A small code base doesn’t just have the direct benefit of reducing the code size part of the equation, it also helps improve the defect density part of the equation. Why? Well, almost all the techniques mentioned above are more effective or tractable on a small code base. You can usually get much better code coverage, and even a reasonable amount of path coverage, with a smaller code base. Code reviews can be more comprehensive. Now, to some extent those techniques can work for large code bases, if you just throw more programmers at them, but static analysis is another matter. Many of the algorithms and techniques involved in static analysis have polynomial, or even exponential, computational complexity in the number of lines of code. So an analysis that may take an hour on a 10,000 line code base could end up taking all week to run on a code base of 100,000 lines of code.
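That last claim is easy to check with back-of-the-envelope arithmetic. Assuming, purely for illustration, an analysis whose running time grows quadratically with code size:

```python
def scaled_runtime_hours(base_hours, base_lines, new_lines, exponent=2):
    """Extrapolate an analysis runtime under polynomial scaling:
    runtime grows as (size ratio) ** exponent."""
    return base_hours * (new_lines / base_lines) ** exponent

# 1 hour on 10 KLOC with quadratic scaling: 10x the code costs 100x the time.
hours = scaled_runtime_hours(1.0, 10_000, 100_000)
print(hours)  # 100.0 hours, i.e. roughly four solid days of compute
```

With a cubic or exponential analysis the blow-up is far worse, which is why keeping the code base small is what makes these techniques tractable at all.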

Of course, this doesn’t address the problem of how you assure that the code you think is in your TCB is really what you think it is. That topic really gets us into trusted boot, trusted protection module, code signing and so on, which I’m not going to try and address in this post.

Now, it should be very clear that if you want to be able to trust your trusted computing base, then it is going to need to be both small and high quality.

Trusted vs. Trustworthy

Tue, 02 Sep 2008 13:01:05 +0000
tech article okl4 trust

If you’ve seen me give a presentation recently, or just been talking to me about some of the stuff I’ve been doing, you’ve probably heard me mention the term trusted computing base, or TCB (not to be confused with thread control blocks, the other TCB in operating systems). So what is the trusted computing base?

The TCB for a given system is all the components, both hardware and software, that must be relied upon to operate correctly if the security of the system is to be maintained. In other words, an error that occurs in the TCB can affect the overall system security, while an error outside the TCB cannot.

Now, the TCB depends on the scope of the system and the defined security policy. For example, if we are talking about a UNIX operating system and its applications, then the trusted computing base contains at least the operating system kernel, and probably any system daemons and setuid programs. As the kernel enforces the security mechanism of process boundaries, it should be obvious that an error in the kernel can affect the overall system security. Traditionally on UNIX there is a user, root, who is all powerful and can change the system security policies, so an error in any piece of software that runs with root privileges also forms part of the trusted computing base. Ordinary applications, of course, are outside the trusted computing base; an error in a database server should not affect the overall system security.
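To get a feel for how much code ends up in this TCB, here is a small sketch that walks a directory tree and lists the setuid binaries on a UNIX system. Each of these runs with the privileges of its owner (often root), so each one is part of the trusted computing base:

```python
import os
import stat

def find_setuid(root):
    """Walk a directory tree and return paths of files with the setuid bit set.

    A setuid binary runs with its owner's privileges rather than the
    caller's, so every hit here is code that must be trusted.
    """
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # broken symlink, permission denied, etc.
            if st.st_mode & stat.S_ISUID:
                hits.append(path)
    return hits

# On a typical Linux system this turns up programs like passwd and sudo.
print(find_setuid("/usr/bin"))
```

Running this over the system binary directories makes the point of the post rather vividly: the pile of code you are implicitly trusting is usually much larger than you think.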

Of course, if we are using a UNIX operating system as the foundation of a database server, then the definition of the TCB changes. In this case not only is the operating system part of the TCB, but the database server is as well. This is because the database server is enforcing the security of which users can access which rows, tables and columns in the database, so an error in the database server can clearly impact the security of the system.

OK, so we now know we have to trust all the code that falls inside the TCB if we want to put any trust into our security system. The problem is, just because we have to trust this code does not give us any rational reason to believe that we can trust this code. Just because code is trusted doesn’t give us any indication at all as to whether the code is, in fact, trustworthy.

To put any faith in the security of the system we should ensure that any trusted code is trustworthy code.

There are a number of things that we can do to increase our confidence in the trustworthiness of our code, which I will explore in coming posts. For more information on the trusted computing base, the Wikipedia page gives a good overview, and links to some useful papers.

Video - Porting OKL4 to a new SoC

Thu, 29 May 2008 18:45:49 +0000
tech article code okl4

Earlier this year I presented at the linux.conf.au embedded miniconf on how to port OKL4 to a new SoC. The talk was taped and had until recently been available on the linux.conf.au 2008 website, but for some reason that website has gone AWOL, so I thought it was a good time to put up my own copy. These videos have the advantage that they have gone through a painstaking post-production phase, which seamlessly melds the slides into the video (well, not quite seamlessly), and also all the bad bits have been removed.

This presentation gives a really good overview of what is involved in porting OKL4 to a new SoC. However, please note that the specific APIs have been somewhat simplified for pedagogical reasons, so this is more an introduction to the concepts, rather than a tutorial as such.

The videos are available in Ogg/Theora and Quicktime/H.264 formats, in either CIF (352x288) or PAL (720x576) resolution. If you can afford the bandwidth I would recommend the hi-res ones, as then you can actually see what is on the screen.

Are microkernels hardware abstraction layers?

Mon, 25 Feb 2008 16:58:27 +0000
tech article microkernel okl4

In a recent post Gernot made a comparison between nanokernels and hardware abstraction layers (HALs). This prompted a question on the OKL4 developers mailing list: couldn’t you consider a microkernel a HAL?

I think the logical conclusion, both theoretical and practical, is a resounding no.

Why? Well, a microkernel is, in theory (if not always in practice), minimal. That is, the only things included in the kernel should be those pieces of code that must run in privileged mode.

So, if a microkernel were to provide any hardware abstractions, it would only be providing the abstractions that have to be in the kernel, which really falls short of a complete hardware abstraction layer.

[Venn diagram showing the overlap between HAL and microkernel properties]

Now, probably the more interesting questions are: should the microkernel provide any hardware abstraction at all, and if so, what hardware should it be abstracting, and what is the right abstraction? After starting to write some answers to these questions I reminded myself of the complexity involved in answering them, so I will leave them hanging for another post.

OKL4 wins iAward

Thu, 31 May 2007 17:16:20 +0000
tech okl4

Last night Open Kernel Labs won an iAward in the Application and Infrastructure Tools category.

It is quite cool that I can now say our microkernel is award winning! I think a lot of this is really due to the awesome engineers working here.