From the Editor in Chief


pp. 2-3


Traveling up to London on the train, I watched the hostess struggling with her buffet trolley replete with coffee and tea dispensers, hot and cold snacks—the usual fare. Passengers crammed into narrow seats and absurdly narrow aisles, with barely any clearance for the trolley. Why are British trains built this way? It all comes down to Victorian rivalry. The rail revolution's great architects were the Stephensons (father George and son Robert) and Isambard Kingdom Brunel. George Stephenson was a practical engineer with no formal education who based his designs on the things he found around him. These included the horse-drawn trucks and tramways that predated the steam train. The gauge they ran on was 4 feet 8-1/2 inches. Such trucks were strong enough to carry coal—they would do for people.

Brunel liked to reason from first principles. He rode the Stephensons' trains and became convinced that a smoother, faster ride would result from a broader gauge. His Great Western Railway from Bristol to London ran on a 7 feet 1/4 inch gauge. It was more expensive to build, but he demonstrated its superiority. He commissioned his friend Charles Babbage to prove the case. Babbage built a vibration test rig that rode in the passenger carriages and showed, through miles of recording paper, that Brunel's trains ran more smoothly. Nevertheless, the Gauge Act of 1846 decreed that all new track would conform to the narrower standard. There was simply more of the Stephensons' gauge laid, and it was cheaper. This standard is found in most European countries and was adopted by President Lincoln in the US.


The world around us is the result of such episodes and events. Of course, once we start asking "what if"—searching out other possible lines of evolution—we open a Pandora's box. Our very existence is, according to many scientists, the result of an accumulation of improbable events. Had there not been the Permian extinction that wiped out 90 percent of life, and had there not been the asteroid at the end of the Cretaceous that did the dinosaurs in, we wouldn't be here at all.

Such counterfactuals are an excellent way to work through the dependencies and contingencies of our world. In What If the Moon Didn't Exist? Voyages to Earths That Might Have Been (Harper Collins, 1993), Neil Comins invites us to consider just how different things would be if our satellite hadn't been blasted out of the early Earth. Our days would be eight hours long, winds would howl at 100 mph, and complex life might not have emerged.

We don't have to look to such cosmic catastrophes to see very different worlds. There's a small industry of authors writing about the different futures we could be living in if the past had been different—futures that appear to hinge on a pivotal event. If the Greeks hadn't beaten the Persians at Marathon in 490 BC, would Western Civilization have emerged? If in 1066 at the Battle of Hastings, King Harold had defeated William the Conqueror, how different would things look? What if George Washington hadn't escaped the British on Long Island in the early dawn of 29 August 1776? What if Nikita Khrushchev hadn't backed down during the 1962 Cuban Missile Crisis?

These previous examples are rooted in the political and military arena. But this is no less true in science and technology. My example of the rail standard can be extended without end. What if the Greeks had exploited the steam engine that Heron invented in the first century AD? The Chinese were using block printing for at least 100 years before the earliest-dated European example, from 868 AD—what if large-scale printing had become commonplace over a thousand years ago? If Humphry Davy had bent his mind to it, we might have been living with the electric light shortly after he made the first incandescent electric arc in 1800. And what if Babbage's Analytical Engine had been finished and Ada Lovelace had lived to an old age? In The Difference Engine (Bantam Books, 1992), science fiction writers William Gibson and Bruce Sterling imagine a world in which the computer revolution happens a hundred years earlier—with Charles Babbage's Analytical Engines providing all the advantages and problems with which we're now grappling.


Here's the tension—we know things could have been different, but generally we imagine our present as the only way things can be. But there's nothing inevitable or preordained about where we are today. So, could AI have been different? Take something as familiar as the programming languages we "grew up with." Most of us have used Lisp, but it could have been Fortran! Many of us probably have been asked, "Why can't we do it in Fortran?" Well, John McCarthy was once quoted as saying that if Fortran had supported recursion, he would have used its variant FLPL (Fortran List Processing Language), a language developed by Herbert Gelernter and Carl Gerberich. Lisp might never have existed.

So many alternative histories for programming languages are possible that you hardly know where to start. However, as with all what-ifs, some things seem inevitable—some ideas just must win through. For example, eval, the feature that McCarthy added to Lisp to enable a function to be defined within a running program, is too powerful, too useful not to feature in programming languages. Or take the rise of object-oriented programming—wasn't this destined to occur once Alan Kay designed the first version of Smalltalk back in 1972?

Much about the construction of artifacts and languages, or devices and explanations, is subject to the vagaries of history. Perhaps it's different where the truth is somehow to be discovered—whether it's in graph theory or complexity, or in tractability or proof. But deep philosophical issues exist here, too—what we seek to prove and understand is also a function of our interests and goals at a particular time. Was it inevitable that fundamental concepts such as circumscription, the frame problem, and the situation calculus should have emerged the way they did, when they did? That man John McCarthy again.

Other what-ifs are more subtle. What if the Turing Test hadn't become the touchstone for early AI? What if English weren't the de facto language of modern computational linguistics? What if Marvin Minsky and Seymour Papert hadn't published Perceptrons (MIT Press, 1969), and neural network research had continued apace during the following couple of decades? What if the WWW project had been killed off?

Other possible histories in computer science have to do with the establishment's failure to appreciate or value developments occurring before its eyes. By 1962, Ivan Sutherland at MIT's Lincoln Laboratory had already built Sketchpad—it ran on the TX-2, one of the most powerful computers of its time. Sketchpad was years ahead of its time in terms of the CAD and interactive graphics on display. Quite aside from a lack of computing power to put Sketchpad in the hands of engineers, there was no widespread understanding of what CAD might achieve. In 1968, Doug Engelbart was showing the future of personal computing—direct manipulation, hypertext systems, and collaborative work—but corporate business computing saw no future in it. What if Xerox had marketed the Alto computer it had developed in 1973, with its menus, icons, pointers, and mice?

If history is the history of great men and women … we can certainly wonder how things might have been different if Alan Turing hadn't taken his life in 1954 or if David Marr hadn't died of leukemia 24 years ago. What if Fred Brooks hadn't headed up the IBM 360 effort, hadn't insisted on a common instruction set across a family of machines, or hadn't pushed the concept of intelligence amplification in his work with scientists? What if John McCarthy hadn't had the conversation with Marvin Minsky and Jerome Wiesner that led to the MIT AI Lab being founded in 1959? What if McCarthy had never moved to Stanford and founded the AI Lab in 1963? What if John McCarthy's parents had never met! But this is true for all of us: whatever our contributions, great and small, the world would be different without them.


The what-ifs canvassed here aren't independent—human and social, scientific and technical, commercial and economic, and political and military all interconnect. AI, more than most subjects, also needs to look forward to imagine possible futures. It needs to do this because it could have such a large impact on so much around us. Looking into possible futures and reflecting on alternative pasts help us understand the present. I'd certainly be interested in your what-ifs of AI and computer science—send your ideas to


