
Systemantics: How Systems Work and Especially How They Fail


Things are not working, that much we agree on. For me, these ‘things’ are mostly software systems, which drive me closer to the edge of insanity the more I work with them; for the author of this book, they are mostly human organizations. The arguments of the book are supposed to apply to all systems, so we can ignore this minor difference. Who or what is to blame for this state of affairs? Or better, how can we navigate it? The culprit, according to the author, is the systems we see everywhere, entangled and intertwined, permeating our lives and pulling us in all directions. The solution he proposes is to be suspicious of systems and to know their tricky ways (“We must learn to live with systems, to control them lest they control us”, p. xiii). An extensive definition of a system is not attempted, rather wisely, because this pushes the reader to look for systems in their own life. There is one roundabout definition towards the end of the book which I found rather enlightening, however: “The system represents someone’s solution to a problem. It does not solve the problem” (p. 74). More than systems per se, the author targets what he calls systemism, the vacuous belief in the fundamental usefulness of systems, and the book aims to cast it out of the reader through succinct statements of the flaws of systems. The book is written in a tongue-in-cheek style. The author makes claims of systematicity and orders his argument into axioms that should somehow lead up to a consistent theory, but that’s not really the case. This doesn’t mean the book is inconsistent, however. The axioms serve to deconstruct systemism and to show, in the process, how systems not only fail to function as advertised, but also blind us to their failure.

The argument starts off with a statement of the singularity of systems: all systems exhibit system behavior, and when we have a new one, we will have to deal with its problems, just as with all the others. Systems also have a will to survive and grow. Based on work on the size of administrative systems, the author (rather informally) puts the growth of systems at around 5 to 6 percent per year. This despite the fact that systems do not produce the results we expect of them. In fact, we cannot even predict what kind of behavior complex systems will exhibit. This is due to what the author calls the Generalized Uncertainty Principle (GUP): “Complex systems exhibit unexpected behavior” (p. 19). The simple and straightforward explanation for the GUP is that no one understands the real world well enough to predict the outcome of complex processes. This is a fact that has been internalized in parts of the startup community through the principle of experimenting and failing fast to find out what works, instead of just guessing. One obvious trick for beating the GUP is to take a simple system that is well understood and make it bigger. This will not work, however, which I have had the chance to observe personally a number of times, and the author agrees: “A large system, produced by expanding the dimensions of a smaller system, does not behave like the smaller system”.
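As an aside, that growth figure compounds faster than it sounds. A quick back-of-the-envelope calculation (mine, not the book’s) shows how soon a system growing at 5 to 6 percent a year doubles in size:

```python
import math

# Rough doubling time for a system growing at the 5-6 percent
# per year the author estimates for administrative systems.
for rate in (0.05, 0.06):
    years = math.log(2) / math.log(1 + rate)
    print(f"{rate:.0%} annual growth -> doubles in about {years:.0f} years")
```

At those rates, the system doubles roughly every 12 to 14 years, which goes some way towards explaining why systems seem to be everywhere.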

The next axiom, called Le Chatelier’s principle, is also something I have had the misfortune of observing: “Systems tend to oppose their own proper functions” (p. 23). That is, systems, through ill-defined proposals for improvement, will install procedures which nominally aim at improvement, but in practice bind people and resources in what the author calls Administrative Encirclement. The example the author gives is from an academic setting, but every software developer knows the dread of having to attend meetings that are supposed to improve efficiency, but accomplish nothing other than keeping people from working.

The result of the GUP and Le Chatelier’s principle is that systems do not perform the function that a similar system of smaller size would perform. The relationship runs in the opposite direction: the system is not defined by the function, but the function by the system: “The function or product is defined by the systems-operations that occur in its performance or manufacture” (p. 36). But if systems are inefficient and produce the wrong thing at the same time, how come they don’t self-correct? To do so, a system would need the capacity to perceive its situation, which it lacks. For systems, “The real world is what is reported to the system” (p. 39). I think this is one of the most striking lessons of this book. It is astonishing how distorted the views of people working within a company (especially in higher positions) can become, and this precludes making any significant changes in the company. This is also one of the most significant parallels between human and software systems. A software system is also only as good as its fidelity to the real world. Some systems keep running for a long time as their records of reality and the real world itself drift apart. The author also has a really nice name for the ratio of the reality that reaches the administration to the reality impinging on the system: the Coefficient of Fiction.

What about intervening in systems as an outsider? Can one judge their behavior from the outside, without the tinted glasses through which the system sees the world, and improve the system? According to the author, this is possible only if the system already worked at some point. One cannot build a complex system from scratch and expect it to work. A working complex system can be achieved only by starting with a small system that works and growing it. This is another parallel to software development: large systems that are designed before a single line of code is written, and are then built by separate teams, face huge problems when the time for integration comes. This insight has led to the agile movement, which aims for always-working software that is integrated as frequently as possible. What’s more, software teams face a similar issue. Gathering a large number of developers together and telling them to build a specific thing does not work either. The best approach is to start with a relatively small team, see what works and what doesn’t, establish a culture, and grow from there.

How systems deal with errors (or fail to do so) is one of the most relevant parts of the book for modern technological systems. Because systems tend to grow and encroach, there is no end to the ways in which they can fail. These ways, and the crucial variables that control failure, can be discovered only once the system is in operation, since the functioning of a complex system cannot be deduced from its parts, but only observed during actual operation. These points lead to the conclusion that “Any large system is going to be operating most of the time in failure mode” (p. 61). It is therefore crucial to know what a system does when it fails, and not to dismiss failure as something extraordinary. This is not so easy, however, since, as per the Coefficient of Fiction, it is difficult for a system to perceive that it is working in error mode, which leads to the principle that “In complex systems, malfunction and even total nonfunction may not be detectable for long periods, if ever” (p. 55).

If it is so difficult to design systems that work, and to keep them working as they grow, how are we supposed to live with them? The obvious first step is to avoid introducing new ones, that is, “Do it without a system if you can” (p. 68), because any new system will bring its own problems. If you absolutely have to have a system, though, the trick is to design it with human tendencies rather than against them; in technological systems, this is understood as usability. Another principle is not to couple systems too tightly in the name of efficiency or correctness. This is stated as “Loose systems last longer and function better” (p. 71).
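To make the loose-coupling point concrete in software terms, here is a minimal sketch of my own (the Notifier, EmailNotifier and process_order names are hypothetical, not from the book): the order-processing code depends only on a narrow interface, so either side can change, or fail, without dragging the other down with it.

```python
from typing import Protocol

class Notifier(Protocol):
    """The narrow interface the order-processing code depends on."""
    def notify(self, message: str) -> None: ...

class EmailNotifier:
    """One possible implementation; it can be swapped without touching callers."""
    def notify(self, message: str) -> None:
        print(f"emailing: {message}")

def process_order(order_id: int, notifier: Notifier) -> None:
    # The order logic knows nothing about email servers; it only talks
    # to the Notifier interface, so the coupling between the two stays loose.
    notifier.notify(f"order {order_id} processed")

process_order(42, EmailNotifier())
```

A tightly coupled version would call the email machinery directly, and an outage or redesign there would ripple straight into order processing, which is exactly the kind of brittleness the book warns about.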

After reading the book, and writing this review, I have only one question in my mind: why does this book exist? How can it exist? How can it be that so many mind-blowing insights about technological systems were derived by an MD, and recorded in an obscure 80-page book sometime in the seventies? And what other books are out there that are as good as this one, and not yet discovered?