Issue No. 04 - July-Aug. (2013 vol. 30)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MS.2013.89
Grady Booch , IBM
When I was just a kid—especially in my tween years—I discovered the joy of Taking Things Apart. As I reflect back, this was a major stage in my lifelong pursuit of understanding the world around me.
Clocks and radios trembled in my presence, for they knew that their days were numbered. This was the age in which vacuum tube radios were still in plentiful supply as they were quickly being replaced by more modern transistor radios. What else could one do to an obsolete radio? Why, take it apart, of course! Over time, I accumulated a large collection of capacitors, resistors, inductors, and various other detritus harvested from the rich pickings of many an old radio. To further my curiosity, I would visit our local library as often as I could, searching for the latest book on electronics that would help me understand what all the parts did and how I might bring them together in new, interesting ways, especially in ways that created sparks, smoke, or smells.
Clocks were another source of destructive joy. The great thing about mechanical clocks and watches is that they have so many moving parts. In particular, I remember a much-loved pocket watch. While it worked, I would often remove its back and observe the action of its gears, marveling at how cunningly they were fashioned. I take responsibility for hastening its demise, thanks to my enthusiastic overwinding. But, to me, this was not a loss but a grand opportunity to take it apart so that I might understand its operation.
As a side note, perhaps as an act of unconscious penance and certainly in remembrance of these clocks past, I have a reproduction of a wooden clock from 1335 in my office. Its steady ticking amid the frenzy of the billions upon billions of transistors in all the computers that surround me offers a comforting contrast.
And that's the thing: If you take apart a software-intensive object such as a smartphone or a laptop, what do you see? Very, very little.
Recently, I had the pleasure of replacing the hard drive in my laptop. Again, I call it a pleasure, for it gave me the opportunity to open up the back of my MacBook and peer inside. To a geek such as me, it was a thing of wonder. Here was the CPU, there was the main memory, over in this area was all the logic for driving the display, all cunningly manufactured in ways to optimally balance the engineering forces on it: heat dissipation, interchangeability, electrical noise elimination, economy of space, function, cost, and so on.
From a hardware perspective, I could have easily explained to a nongeek what most of the components were and what they did. In fact, from time to time, I would take apart a computer or an old printer with one of the children in my life, showing them the joy of Taking Things Apart. From a hardware perspective, this was enough to begin to part the curtain on computing. Behind the glossy displays, switches, and sleek cases, it was possible to reveal the physical stage on which computing took place, at least at a high level of abstraction.
However, that's not nearly enough to explain the mystery behind the curtain. To begin with, most hardware is so tightly packaged that there is no "there" there. At least in the days of discrete logic, one could identify basic circuitry, but with modern packaging all of the interesting stuff is hidden behind dull, uniformly colored rectangles. Furthermore, unlike an old radio or a mechanical clock, there are few visible cues for how those parts work with one another.
But it's even worse than that. As we try to part this curtain, we realize it's far more complex and ineffable than we first thought. Hardware is the physical body that we bring to life by the breath of software, and, like air itself, there's nothing to see, nothing obvious to take apart, to help one understand how software works. Now, as someone behind the curtain, I know that large, software-intensive systems are things of beauty. Well-structured systems of several million lines of code have a regularity—occasionally punctuated by exquisite chaos—that to me has incredible appeal, especially knowing just how fragile and delicately balanced their parts must be.
Most of my nongeek friends think I am utterly mad. Now, this might be true along certain dimensions, but I'm convinced that opening the curtain on the mystery of computing is important. On the one hand, we are slowly surrendering our lives to computing, so it's good that we all understand at least the fundamentals of computing. On the other hand, no matter what future one might imagine, it relies on software not yet written, so I'd like to help inspire the next generation of programmers to stand in stunned awe at the power and beauty of software in the hope that they can responsibly contribute to its future.
Kick-Starting Collaboration
A little over a year ago, we successfully completed a Kickstarter project for our transmedia project, Computing: The Human Experience. We've made considerable progress since then. The Kickstarter project did indeed radically kick-start our work, making it possible for us to collaborate with the Computer History Museum and KQED, Silicon Valley's public television station, on developing our multipart documentary series. As part of that transmedia effort, we're just now starting the production of a series of short YouTube videos called Ask Grady, whose purpose is to explain some of the science behind computing. We've been inspired by the work of the Khan Academy, and we're also encouraged by the work of code.org and Code for America, both of which seek to educate and focus a generation of programmers.
As an insider to computing, I've had to ask myself, what are the fundamentals that would be interesting to someone on the outside? What are the 10, 20, or 30 nuggets of information that collectively explain the mystery behind the matter? This is what we seek to cover in our series of online videos.
Let's start with some real basics, things that we accept without question. Why digital? In other words, why is modern computing based on discrete calculations rather than analog, which is how the world seems to work? And, while we're at it, why binary? What's so special about reducing everything to ones and zeros?
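To make the "reducing everything to ones and zeros" question concrete, here's a small sketch (my own illustration, not part of the video series): repeatedly dividing a number by two and collecting the remainders yields its binary form, and two well-separated signal levels are far easier for circuitry to distinguish reliably than ten.

```python
def to_binary(n):
    """Reduce a nonnegative integer to its list of binary digits."""
    bits = []
    while n > 0:
        bits.append(n % 2)  # the remainder is the next bit
        n //= 2
    # Remainders come out least-significant first, so reverse them.
    return list(reversed(bits)) or [0]

# 2013 reduces to eleven ones and zeros.
print(to_binary(2013))  # [1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1]
```

Everything a computer stores—numbers, text, images, the software itself—ultimately takes this form.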
Next up the ladder of abstraction: How does Boolean logic make it possible to calculate? This can lead us down a hardware path: How does a CPU work? How does memory—of various types—work? How does information move within a computer or around the world? It can also lead us down the software path: What is an algorithm, and why is it important? What is a program? What does a system of systems look like (and why do they exist as they do)?
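The "how does Boolean logic make it possible to calculate" question has a classic answer worth sketching (again, my illustration, assuming nothing beyond the logic gates themselves): an XOR gate gives the sum bit of two binary digits, an AND gate gives the carry, and chaining such adders is enough to add numbers of any size.

```python
def half_adder(a, b):
    """Two Boolean gates: XOR yields the sum bit, AND the carry."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Two half-adders plus an OR handle an incoming carry."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_bits(x_bits, y_bits):
    """Add two equal-length little-endian bit lists, gate by gate."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 + 5 = 8: [1,1,0] + [1,0,1] (least-significant bit first)
print(add_bits([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1]
```

This is, in miniature, what a CPU's arithmetic unit does in silicon.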
We can also go meta: What is a bug? How is software built (and why is it so hard to get right)? What is a programming language, and why do we have so many? Why do systems consist of many little languages in layers of abstraction?
There are also some topics that touch on the philosophical: How is it that computing is universal? What can we not do with computers? What are the limits of what we can do?
Our primary audience for these videos is the intellectually curious—those who want to peek behind the curtain, who want to experience the joy of Taking Things That Cannot Be Seen Apart. Therein, we could use your help, you, the readers of IEEE Software. Think of it this way: What are the fundamentals you think the general public should know about the matter of computing? What are the basic scientific or engineering elements that the intellectually curious should know?
I'd like to hear from you; please email your ideas to email@example.com.
Grady Booch is an IBM Fellow and one of the UML's original authors. He's currently developing Computing: The Human Experience, a major transmedia project for public broadcast. Contact him at firstname.lastname@example.org.