In his memoirs (which sometimes read like a fairy tale missing its dark side), Richard Feynman tells the story of how he fixed a radio just by thinking. In the old days of vacuum-tube radios, when Feynman was still a kid, he earned pocket money fixing radios. One particular radio made loud scratchy noises when turned on, but functioned normally once the tubes had warmed up after a few minutes. As he paced up and down in front of the radio, trying to figure out what the problem was and how to fix it, the man who had driven him there started mocking him, asking why he was walking around and thinking instead of fixing the radio. The thinking did help, though, and Feynman came up with a possible explanation for the defect. The circuit properties of vacuum tubes change with temperature. Since the radio worked once the tubes were warm, the tubes themselves were not faulty. When they were cold, however, they might have had properties that created a feedback loop, amplifying simple background noise to much higher volumes. A possible solution was to swap tubes that had the same specifications: this would break the feedback loop in the cold circuit but leave the warmed-up circuit intact. When he tried this, it actually worked, to the surprised delight of the man who had been making fun of him. In his eyes, Feynman became the boy who could fix radios by thinking. How could Feynman do this? He had general knowledge of the radio's inner workings, enough to come up with possible solutions. The radio wasn't a complete black box to him. Although he did not know its exact intricate details, what he knew about how circuitry is built and how tubes behave was enough to reason about the device and come up with an easy fix, a hack, for the problem.
We don't have people who fix radios anymore, because (a) radio devices have become so tightly integrated and difficult to repair that we discard them and buy new ones when they break, and (b) we don't use radios anymore, because many special-purpose electronic devices have been turned into software. The stored-program general-purpose computer, by definition the digital machine that can imitate all other digital machines, has encroached on every domain that can be turned into digital information processing. We are all users of software now. This new state of affairs, the prevalence of computational tools, led to campaigns to get everybody on the code train, and battle cries of man the keyboards, learn to program. Such sweeping generalizations led to the predictable backlash, with a common sentiment among programmers put to digital ink by the popular coder-blogger Jeff Atwood in a recent post in which he pleaded with the general population not to learn to program. Building software is a very specific job, comparable to plumbing, according to Atwood. The fact that you use software, and want to use it better, does not mean that you should learn how to build it.
The plumber analogy, while it underlines the silliness of learning to do something just because we all use it, obscures what I mentioned about software: that it is a ubiquitous and greedy tool. We are damned to live with software, and the amount of time we spend with it is constantly increasing. It is seeping into nearly every corner of our lives. I recently took a course for a sport boat license. We found the course online, the instructor used software to present the material, software was course material (GPS and digital maps), and the recommended learning material was a (rather ugly and buggy) piece of software. When a technology becomes such a prevalent part of our lives, a rudimentary understanding of it turns from a convenience into a condition of consciously living in your times. The technology itself becomes the topic, the color of the glasses through which we look at the world, the hammer society uses to hit whatever resembles a nail. Strangely, the public understanding of how software works, gets made, and breaks is derived from half-baked blog posts like this one, short snippets of television testimony, and whatever a programmer friend tells them over lunch. Giving people the mental tools to think and talk about software does not require making a programmer out of everyone, however, just as teaching how western democracy in its current state works does not require making a constitutional scholar out of everyone. What our fellow citizens need is to learn to reason about software; when the situation calls for it, they should be able to think computationally.
And with software, the situation frequently calls for it. Being everywhere and shape-shifting is all well and good, but software is not always the nice chap that deserves all the access it gets. The one thing you can say about practically all software is that it breaks. Even the most meticulously written piece of software is full of bugs waiting to annoy the user, and most software for the average consumer is not even meticulously written. If you know how to think about computers and software, though, it becomes easier to navigate failures. Instead of falling into an annoyed and frustrated half-paralysis, you can reason about the failure and come up with alternatives. Programs are information processing tools that present a domain in a structured manner, allowing you to engage intuitively with the material at hand. A good tool should get out of the way, making its own presence forgotten, presenting itself as an extension of human capabilities. Faulty software is the antithesis of a good tool. Like a hammer whose head keeps falling off, it becomes a nuisance, and replaces the topic you wanted to work on as the center of attention. If you can reason about software, working with it becomes more like tool use again, and concentrating on the material is back within reach.
I think the way modern software looks and behaves is also complicit in the general frustration it causes. Software artefacts resemble physical ones to enable faster recognition and ease of use; this is called skeuomorphic design. The illusion that there is something natural about things on screen is just too easy to fall into. The feeling that the contents of a document really scroll up or down because they are on something like paper bound to a bar is unavoidable, because the software itself actively tries to create it. But, and this 'but' would be a big part of a computational thinking course were I to design the curriculum, there is nothing natural about software: all of the interactions have to be built to behave that way, and to keep working together as more functionality is tacked on. That nice animation when you resize a window, the one that looks so smooth and organic? Someone wrote code that calculates how the window should appear with each movement of the mouse. The highlighting of text when you mark it with the cursor, an effect second nature to anyone using computers? There is code for that. And because of that, it could break.
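To make the "someone wrote code for that" point concrete, here is a toy sketch in Python, not taken from any real windowing toolkit, of what a smooth resize animation boils down to: an easing function and a loop that computes the window's size at each frame. The function names and frame counts are invented for illustration.

```python
def ease_out(t):
    """Cubic ease-out: progress moves fast at first, then slows.
    t runs from 0.0 (animation start) to 1.0 (animation end)."""
    return 1 - (1 - t) ** 3

def animate_resize(start_width, end_width, frames):
    """Return the window width to draw at each frame of the animation."""
    return [round(start_width + (end_width - start_width) * ease_out(f / (frames - 1)))
            for f in range(frames)]

# Widen a window from 400 to 800 pixels over 5 frames.
widths = animate_resize(400, 800, 5)
```

The "organic" feel is nothing but this arithmetic, repeated dozens of times per second; change a sign or an exponent in `ease_out` and the animation stutters or overshoots.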
Another, no less important, reason why a course on computational thinking deserves to become part of basic knowledge is a political one. Ownership of digital information and technology, and the right to make decisions on matters digital (like what constitutes a crime, what should cost money, or what is patentable or copyrightable), is becoming ever more relevant to financial and political power. As with all matters that become politically relevant, it is very important that the common social understanding of the matter is resistant to fraud, in the sense that someone has less room to talk nonsense in order to further his own agenda. When that senator from Alaska compared the Internet to tubes, and argued that they could get congested if too much 'stuff' is dumped into them, he became the target of sustained mockery, probably causing future politicians to be more careful about what they say about what the Internet is. But similar nonsense claims are made all the time about personal information, about the patentability of algorithms, about what constitutes ownership of information, and so on, partly because the Internet is still relatively young and software has not been an integral part of our lives for very long, but also because people do not know even the simplest things about software. For most of them, software is some ethereal stuff in a black box. Oracle could recently claim that the way software is organized (particularly the Java API) can be copyrighted, and that an algorithm for carrying out a mathematical calculation can be patented. The judge actually went ahead and learned some Java, and pointed out that Oracle's claims are plain ridiculous: given a problem, arriving at a similar solution is exactly what is to be expected in programming, and you can't ask someone for money for writing a similar algorithm in the same programming language. Wouldn't it be awesome if this were obvious to most of the population, and we didn't have to deal with the current insane patent situation?
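The judge's point is easy to demonstrate. The sketch below is in Python rather than the Java at issue, and is only loosely modeled on the kind of short bounds-checking helper that came up in the case; a routine this trivial leaves almost no room for variation, so independent programmers will arrive at nearly identical code.

```python
def range_check(length, from_index, to_index):
    """Validate that the half-open range [from_index, to_index)
    fits within a sequence of the given length.
    Anyone solving this problem ends up with roughly these three checks."""
    if from_index > to_index:
        raise ValueError(f"fromIndex({from_index}) > toIndex({to_index})")
    if from_index < 0:
        raise IndexError(from_index)
    if to_index > length:
        raise IndexError(to_index)

range_check(10, 2, 5)   # a legal range passes silently
```

If writing these few obvious lines could infringe someone's rights, every programmer would infringe daily; that is the absurdity the judge was pointing at.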
What would be some other topics for Computational Thinking 101? One would be how modern computers are built on progressive abstractions: from machine code to programming languages through compilers, and from software running in batches on bare metal to the common ways of organizing a computer's capabilities as an operating system. How the complexity of software grows exponentially with the number of features, not linearly. That not all programming languages are equal, and that there is something resembling a political divide between different conceptions of how to state an algorithm. How data is organized into files that are just bytes, and how this data becomes meaningful only when it is interpreted. How programs are files just like any other, which simply get interpreted in a special way. How networks work, and how networks can be joined into ever larger units. And most importantly, at each step, how all of this is prone to dysfunction, because software is built by humans in large groups, on certain assumptions and under time pressure.
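The bytes-and-interpretation point fits in a few lines of Python: the very same four bytes are a greeting, a large integer, or a list of small numbers, depending entirely on how a program chooses to read them.

```python
import struct

data = b"Hi!!"                          # the same four bytes...

as_text = data.decode("ascii")          # ...read as ASCII text
as_int = struct.unpack(">I", data)[0]   # ...read as one big-endian 32-bit integer
as_numbers = list(data)                 # ...read as four plain byte values

print(as_text)      # Hi!!
print(as_int)       # 1214849313
print(as_numbers)   # [72, 105, 33, 33]
```

Nothing in the bytes themselves says which reading is "right"; a text editor, a calculator, and a hex viewer would each pick a different one, and a file format is ultimately just an agreement about which interpretation to apply.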