I should have read this book earlier. When the whole TDD thing exploded, the web was, for a while, plastered with blog posts and tutorials on the whys and hows of TDD and unit tests. Everyone agreed that automated testing was a great thing; the opinions differed only on when to write the tests, and what the units under test should be. I thought I had understood and soaked in the main tenets of TDD, and did not need to read this relatively small book.
So, what is TDD? It’s a development method where you follow a very simple algorithm, but to the letter. Start with some code, and a set of tests that are green, meaning they compile and all assertions pass. No code and no tests count as green. Write a test that fails - this is red. The failure can be a failing assertion or a compiler error. The failing test is free to be written against the perfect interface, even though that interface is not implemented yet; in this sense, TDD can also be called interface-first programming, but I’m probably not the first one to make that point. In the next step, make the failure go away by changing code. Once you have a working test, refactor to remove duplication. This is the famous red-green-refactor loop. There are many nuances to this loop, of course. For example, how to reduce duplication is left to the developer, but it’s obvious that you shouldn’t just be copying all input/output pairs between the test and the functional code. In case you were asking yourself why the TDD way of working is so strictly codified, Kent Beck answers that question on p. 202: “by reducing repeatable behavior to rules, applying the rules becomes rote and mechanical…When along comes an exception, or a problem that just doesn’t fit any of the rules, you have more time and energy to generate and apply creativity”.
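The loop is easier to see in code than in words. Here is a minimal sketch of one trip around red-green-refactor, in Python rather than the book’s Java, using a toy `Dollar` class loosely modeled on the book’s money example:

```python
# Red: write the test first. Dollar does not exist yet, so calling
# this fails with a NameError -- a compile/import error counts as
# red just as much as a failing assertion does.
def test_multiplication():
    five = Dollar(5)
    assert five.times(2).amount == 10

# Green: the smallest change that makes the test pass.
class Dollar:
    def __init__(self, amount):
        self.amount = amount

    def times(self, multiplier):
        return Dollar(self.amount * multiplier)

test_multiplication()  # no exception: we are back to green

# Refactor: with the test green, remove duplication and improve
# names, re-running the test after every change.
```

The point is not the code itself but the order: the test exists, and fails, before a single line of `Dollar` does.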
Not every kind of test is suitable for TDD. The tests have to be explicitly unit tests. They have to be fast, so that you can run them as often as you want, preferably after each save, to make sure you haven’t broken anything. They have to be independent of each other, so that one failing doesn’t jeopardize the others; essentially, you should be able to run them in a random order. They should be orthogonal, i.e. test different aspects of whatever is being tested, so that when an aspect of the code under test changes, you don’t have to adapt multiple tests. Unit tests should also not rely on environment conditions that could change (network, files), or on services (a database). In sum, a test suite (the collection of unit tests for a module) should give you quick, precise feedback, reliably.
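As an illustration (my example, not the book’s), a test class with these properties might look like this with Python’s `unittest`:

```python
import unittest

class StackTests(unittest.TestCase):
    # Each test builds its own fixture in setUp, so the tests share
    # no state and can run in any order (independence).
    def setUp(self):
        self.stack = []

    # Each test checks one aspect (orthogonality): a change to one
    # behavior should break at most one test.
    def test_push_adds_element(self):
        self.stack.append(1)
        self.assertEqual(self.stack, [1])

    def test_pop_returns_last_element(self):
        self.stack.append(1)
        self.stack.append(2)
        self.assertEqual(self.stack.pop(), 2)

    # Note what is absent: no files, no network, no database --
    # nothing environmental that could make a run flaky or slow.
```

Everything the tests need is built inside the tests themselves, which is exactly what makes them fast and repeatable.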
Why do this? Why not just sit down and bang code out? One could just as well apply the Feynman method in physics to programming, which consists of the following steps: Write down the problem, think, write down the solution. Proving that working in the TDD way is worth it is the whole point of this book. The main reason a developer should be doing TDD is that it’s good for him; that is, it’s about the psychology of development. Unit tests give you the support to work without stress, and try out new ideas all the time. You can write the smallest test possible, solve it in the simplest way imaginable (i.e. return a constant), and then move on to refactoring with the knowledge that the test is there. Since the unit tests are fast, you can just try something out, and quickly see what is failing. One ‘aha’ moment happened for me when Kent Beck went on a few pages with some extremely simple logic (multiplying currency values), improving first the test then the code. At the end of what I thought was an inane display, he pointed out that “TDD is not about taking teeny-tiny steps, it’s about being able to take teeny-tiny steps” (p. 9). You can ridicule TDD all you want for allowing too small steps, but this shows that TDD allows you to take steps of arbitrary size, which helps immensely when you are in a difficult place. Or as stated on p. 42, “TDD is a steering process. There is no right step size, now and forever”.
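The “return a constant” move is a pattern the book calls Fake It. Here is a hedged sketch of it, with a made-up currency-conversion function (the names and numbers are mine, not the book’s):

```python
# Fake It: first make the test pass with a constant, then
# generalize under the protection of the passing test.

def test_conversion():
    assert to_dollars(francs=10, rate=2) == 5

# Step 1 -- green by faking: return exactly what the test expects.
def to_dollars(francs, rate):
    return 5

test_conversion()  # green

# Step 2 -- refactor: the duplication between the test's "5" and
# the code's "5" is removed by computing the value instead.
def to_dollars(francs, rate):
    return francs / rate

test_conversion()  # still green
```

The constant looks silly, but it buys you a green bar immediately; the generalization then happens as refactoring, never while the bar is red.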
Another really important effect of unit tests, which I have also observed in my daily work and which can be used as a lever for productivity, is concentrating on what’s next. Even if you don’t follow TDD, having exactly one failing test, and trying to get it to work while noting down all distractions to deal with later, is a great boon for productivity. When there is no such lens to focus the attention, developers are prone to code for hours without finishing anything in particular, working first on one feature and then on another. Keeping to the one failing test helps avoid such meandering.
And then there is the topic of design. One of the most disputed arguments of TDD has been that it takes care of design, at least to a certain extent. Design is handled in this book in the context of refactoring and design patterns. Kent Beck argues that the original design patterns book has a bias towards design as a phase, and not as a part of refactoring, with which I would tend to agree. In TDD, on the other hand, design happens mostly in the refactoring that aims to remove duplication. By keeping the tests green while improving design, the developer can “Code for tomorrow, design for today” (p. 195). This is one place where I think Kent Beck is kind of cheating. Here is where he is right: As stated on p. 203, TDD shortens the feedback on design decisions. When you decide to change the interface of a unit, and see how many tests would fail in which way, the decision is much easier to make than if the tests were missing. However, I think Beck is overlooking the fact that in order to get design right with TDD, you need to be able to recognize good design when you see it, and this is not a small thing. TDD makes it easier to navigate the space of possible designs and interfaces. This is attested by another interesting observation made by the author, that “Doing a refactoring based on a couple of early uses, then having to undo it soon after is fairly common” (p. 102). Recognizing design that has to be undone, however, is a part of software wisdom; it requires years of experience, and cannot really be achieved with TDD alone.
One aspect where TDD helps immensely with design, however, is in terms of cohesion and coupling. Cohesion refers to how focused a class is on the task it should perform, whereas coupling is a measure of how dependent various classes are on each other. When the above points on good unit tests are observed (independence, orthogonality), they point naturally to a breakdown of the domain in terms of classes with high cohesion and low coupling. Or in Kent Beck’s words, “I never knew exactly how to achieve high cohesion and loose coupling regularly until I started writing isolated tests” (p. 125).
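A small sketch of how this works in practice (my own example, with hypothetical names): a class that insists on talking to a real database cannot be tested in isolation, so the pressure of writing an isolated test itself drives you to inject the collaborator, which is exactly loose coupling.

```python
class ReportFormatter:
    # High cohesion: this class only formats. Fetching the data is
    # someone else's job, injected as a callable, so the class is
    # not coupled to any particular database or service.
    def __init__(self, fetch_totals):
        self.fetch_totals = fetch_totals

    def format(self):
        totals = self.fetch_totals()
        return "\n".join(f"{name}: {value}" for name, value in totals)

# The isolated test supplies a stub instead of a real service.
def test_format():
    formatter = ReportFormatter(lambda: [("widgets", 3)])
    assert formatter.format() == "widgets: 3"

test_format()
```

The test never touches a database, and as a side effect the production class no longer depends on one either.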
There are a couple of things that disturbed me about the book, though. I was surprised at the weird snark in some places. On page 68, for example, Beck writes “the methods have to be public because methods in interfaces have to be public (for some excellent reason, I’m sure)”. It’s pretty obvious why, actually. An interface is by definition a record of what is open to the rest of the system. Why would one want to check for private methods in an interface declaration? That’s a matter of implementation, and internal business of the class. Also, the writing is sometimes affectedly indirect and humorous. There are two paragraphs on p. 124 telling an anecdote, and for the life of me, I can’t understand either what happens or what it’s supposed to mean. Due to this affected tone in some places, I could not discern whether some advice (e.g. on p. 138) was meant as humor or not. In this particular case (“If you don’t know what to type, then take a shower, and stay in the shower until you know what to type”), if it wasn’t humor, it was rather shallow advice, too.
No matter what you make of TDD, this book sets the record straight on what it actually means, and why you should practice it. If you care about quality software or testing, you should definitely read it.