Thoughts on Out of the Tar Pit

This popular and dense paper extends Brooks' work on the difficulties of building large-scale software systems, in particular the relation between complexity and state.

Here are some thoughts and reflections on the paper's sections I found most interesting.

A small summary

In general, Out of the Tar Pit tries to address the causes of the "software crisis", in particular the complexity and state of large-scale software systems.

The first half of the paper focuses on complexity: causes, how we understand systems, analysis of the classical approaches to managing complexity, accidental/essential complexity and a recommended general approach.

The second half is a practical example of the paper's recommendations, implementing a hypothetical estate agency system.

I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
— C. A. R. Hoare

How do we understand software systems?

Note: This paragraph relates to section 3 Approaches to Understanding in the paper.

Complexity undermines our ability to understand software systems, which in turn is a major cause of unreliability, security, and performance issues. It is therefore important to explore and reason about how we usually proceed when we try to understand a system.

So, let's imagine you were given the task of learning and understanding a totally unfamiliar codebase.

How would you do it? What would you do?

The paper suggests that there are mainly two possible methods: testing and informal reasoning.

Testing is about exploring the system from the outside. A black box: for a given input, observe the output.

Informal reasoning, instead, is done from the inside, by reading the code.

According to the paper, of the two, informal reasoning should be considered the more important. Reading code gives access to information that testing cannot provide, and thus a deeper understanding of the system can be achieved.

This point becomes much more evident if we imagine manually testing a system in a specific scenario, which is only a subset of all the possible states the system can be in.

The key problem [with testing] is that a test (of any kind) on a system or component that is in one particular state tells you nothing at all about the behaviour of that system or component when it happens to be in another state.
— Ben Moseley, Peter Marks

During testing, especially if all the other states are not obvious, we may be led to think that the limited subset we’re testing represents the whole set of possible states.

Only by reading the code would we realise how wrong our assumptions were and how many states the application actually handles.
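To make this concrete, here is a toy sketch of mine (not from the paper): a component with hidden internal state, where a black-box test that passes in one state tells us nothing about what happens in another.

```javascript
// A counter with a hidden limit. The limit is internal state
// that a black-box test may never drive the component into.
function makeCounter(limit) {
  let count = 0; // hidden state
  return {
    increment() {
      if (count >= limit) throw new Error("limit reached");
      count += 1;
      return count;
    },
  };
}

const counter = makeCounter(2);
console.log(counter.increment()); // 1: a test in this state passes
console.log(counter.increment()); // 2: and in this state too
// counter.increment();           // but in this state it throws
```

Only by reading the implementation would we discover the third, failing state.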

That said, both methods have limitations. For this reason, a combination of the two should be considered for a more prudent approach.

Why complexity?

Note: This paragraph relates to section 4 Causes of Complexity in the paper.

The principal causes of complexity are state, control, and complexity itself, which can lead to code problems of all sorts: duplicate code, dead code, unnecessary abstractions, missed abstractions, and poor modularity.

In a few words, we could say that complexity breeds complexity.

State and complexity

State affects complexity as it affects testing and informal reasoning.

In regard to testing, the higher the number of states, the higher the number of tests needed in the attempt to cover all the possible state combinations.

And the higher the number of tests, the greater the risk of introducing a mistake or missing a case.

Informal reasoning is affected by its very nature: the developer has to build a mental representation of the system, and the higher the number of states, the greater the mental effort required to keep track of all the possible combinations.

As with testing, the higher the number of states, the higher the risk of forgetting or missing a case.
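As a back-of-the-envelope illustration (my numbers, not the paper's): with n independent boolean flags, exhaustively covering every combination requires 2^n test cases, so the test count explodes long before the code does.

```javascript
// Number of test cases needed to cover every combination
// of n independent boolean flags.
const statesToCover = (flags) => 2 ** flags;

console.log(statesToCover(3));  // 8
console.log(statesToCover(10)); // 1024
console.log(statesToCover(30)); // 1073741824
```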

From complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability.
— Frederick P. Brooks, Jr.

A small note on hidden states

As Brooks said, from complexity comes the difficulty of enumerating all the possible states. Sometimes this difficulty comes from having state embedded in parts of the system.

These embedded implicit states can be hard to spot, or look so innocent that they pass unnoticed, and their presence manifests only when something goes wrong.

These types of hidden states usually lead to runtime errors.

Some examples are optional variables or API calls.

Let's assume we have a variable x of type string.

Now, as that variable is of type string, you could have in your code something similar to x.length === 0.

In this case, the states you would have are STRING_LENGTH_0 and STRING_LENGTH_GREATER_THAN_0.

But if the very same variable is optional, you have an extra state: VARIABLE_NULL.

Making a variable optional may appear innocuous, but we went from 2 possible states and 1 type to 3 possible states and 2 types.

Let the code grow and nasty stuff can happen.

A trivial but classic example is evaluating x.length === 0 with x as null, which greets you with our old friend Uncaught TypeError: Cannot read properties of null in the JavaScript world.
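A minimal sketch of the three states in JavaScript (the isEmpty helper is mine, just for illustration):

```javascript
// Two states for a plain string:
const a = "";      // STRING_LENGTH_0
const b = "hello"; // STRING_LENGTH_GREATER_THAN_0
console.log(a.length === 0); // true
console.log(b.length === 0); // false

// Making the variable optional adds a third state:
const c = null; // VARIABLE_NULL
// c.length === 0; // TypeError: Cannot read properties of null

// Every consumer now has to handle the extra state explicitly:
const isEmpty = (x) => x !== null && x.length === 0;
console.log(isEmpty(c)); // false
```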

Control and complexity

Control is about the order in which things happen.

There are at least two cases in which control can affect complexity. One is when a specific order is not required to solve a problem, but the programming language forces an implicit order on the execution of the program.

This can create difficulties with informal reasoning, as the developer has to recognise the implicit order and then mentally strip it away.

The second, more serious issue is concurrency and asynchronicity.

This affects informal reasoning and testing.

Informal reasoning is affected because the developer has to mentally branch through all sorts of possible scenarios; for testing it is even worse, as each run can potentially produce different results from the same initial state.
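A small JavaScript sketch of why concurrency hurts reasoning (the increment function is a contrived example of mine): two overlapping read-then-write operations on shared state can silently lose an update.

```javascript
let sharedCounter = 0;

// A non-atomic increment: read, yield to the event loop
// (as if awaiting I/O), then write back.
async function increment() {
  const current = sharedCounter; // read
  await Promise.resolve();       // yield; another increment can interleave here
  sharedCounter = current + 1;   // write back a possibly stale value
}

async function main() {
  await Promise.all([increment(), increment()]);
  console.log(sharedCounter); // 1, not 2: one update was lost
}

main();
```

Reading increment in isolation, nothing looks wrong; the bug only exists in the interleaving of the two calls.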

How do we manage complexity?

Note: This paragraph relates to section 5 Classical approaches to managing complexity in the paper.

The paper assumes that part of the answer to this question relates to programming languages and how they deal with complexity.

For this reason, it examines the major programming styles (OOP, functional programming, and logic programming) and the properties they have.

Long story short, functional programming is the style that presents the fewest drawbacks.

Most of all, functional programming by its very nature avoids mutable state (and side effects), which allows systems to gain the property of referential transparency.

Loosely speaking, referential transparency means that, given the same set of arguments, a function will always return the same result.

Referential transparency should facilitate informal reasoning.

This is the opposite of what happens in OOP, where invoking a method with the same arguments can return a different result each time: referential opacity.

referential transparency vs referential opacity

referential transparency
given a set of arguments, a function will always return the same results
referential opacity
given a set of arguments, a function may return different results
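A quick JavaScript contrast between the two (both functions are mine, for illustration):

```javascript
// Referentially transparent: the result depends only on the arguments.
const add = (a, b) => a + b;
console.log(add(2, 3)); // 5, every time

// Referentially opaque: the result also depends on hidden mutable state.
let total = 0;
const addToTotal = (n) => {
  total += n; // side effect on shared state
  return total;
};
console.log(addToTotal(3)); // 3
console.log(addToTotal(3)); // 6: same argument, different result
```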

In addition, OOP has an inherent property whereby each object is unique, even if it presents the same attributes as another object; this is called intensional identity.

This property violates the extensional identity used in relational algebra, and it heavily affects informal reasoning.

intensional identity vs extensional identity

intensional identity
each object is unique, even if it presents the same attributes as another object.
eg: var a = {}; var b = {}; a === b; // false
extensional identity
things are considered the same if their attributes are the same eg: var a = 1; var b = 1; a === b; // true

The last programming style taken into consideration is logic programming. As a declarative style, logic programming should have the same potential as functional programming for avoiding mutable state, plus the ability to remove complexity coming from control.

Unfortunately, according to the paper, no logic programming language has yet fully taken advantage of the ideas this style would allow.

declarative style vs imperative style

declarative style
specify what needs to be done
imperative style
specify how it needs to be done
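In JavaScript terms, the difference looks like this (a toy example of mine):

```javascript
const numbers = [1, 2, 3, 4, 5];

// Imperative: spell out how it needs to be done, step by step.
const evensImperative = [];
for (let i = 0; i < numbers.length; i++) {
  if (numbers[i] % 2 === 0) evensImperative.push(numbers[i]);
}

// Declarative: state what needs to be done.
const evensDeclarative = numbers.filter((n) => n % 2 === 0);

console.log(evensImperative);  // [2, 4]
console.log(evensDeclarative); // [2, 4]
```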

The recommendation

Note: This paragraph relates to section 6 Accidents and Essence and Recommended General Approach in the paper.

This section is roughly a third of the whole paper, for this reason only a brief summary will be presented.

Before diving into the recommendations, we have to examine the concepts of accidental and essential complexity, as they sit at the core of the ideas proposed.

Essential complexity relates to the essence of the problem as seen by the user, while accidental complexity is all the rest (details and difficulties) a team will face when dealing with the real world.

To clarify this point, consider a formal specification language like TLA+ or Coq. A formal specification is the closest thing to essential complexity we have.

Instead, bits, bytes, caching, databases, backups and so on are part of the accidental complexity.

That said, the main recommendation revolves around avoiding accidental complexity from state and control as much as possible.

Whatever accidental complexity remains has to be separated from the rest of the system.

A functional programming style is chosen to avoid mutable state.


The content presented is a rapid overview of the first half of Out of the Tar Pit that hopefully gives you a sense of the paper and some of its key points.

The paper has to be read in full to be properly appreciated, but I personally found Approaches to Understanding, Causes of Complexity, Classical approaches to managing complexity, and Accidents and Essence to be the juiciest sections.