Issue No. 2, March/April 2007 (vol. 22)
Published by the IEEE Computer Society
Daniel E. Cooke , Texas Tech University
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MIS.2007.23
This article reviews Thinking about Android Epistemology, edited by Kenneth M. Ford, Clark Glymour, and Patrick J. Hayes.
Thinking about Android Epistemology, Kenneth M. Ford, Clark Glymour, and Patrick J. Hayes, eds., AAAI Press, 2006, ISBN 0-262-06184-8, 384 pp., US$30.00.
Thinking about Android Epistemology, by Kenneth M. Ford, Clark Glymour, and Patrick J. Hayes, extends the collection of papers that constituted their 1995 book, Android Epistemology (AAAI Press). The new volume includes papers by the editors and other major AI researchers, including Marvin Minsky, Herb Simon, Anatol Rapoport, Douglas Lenat, and Daniel Dennett. To varying degrees, each paper is coherent and self-contained. However, readers will benefit from reading the collection from start to finish.
Although literature on science and philosophy reaches back many centuries, this book suggests that 20th-century mathematical advances (especially those of Turing and Gödel) combined with theoretical and empirical advances in AI have revealed exciting new territories for thought and experiment. Given that much of philosophy concerns our understanding of intelligence, the scientific exploration of AI arguably provides the most natural binding of science and philosophy.
The book introduces you to several landmark turning points in AI's development as a field. For example, Gödel's incompleteness theorems don't make AI impossible any more than they prevent human intelligence. If a human can't reach a conclusion because of incompleteness, why would we expect a machine to do better? And if a machine does exhibit intelligence, we can't claim that it's unintelligent simply because we believe in some vague quality of the "human essence" of intelligence. In other words, assuming that we know how to detect intelligence, if a machine shows all signs of intelligence, we must conclude that it's indeed intelligent.
This brings you to the book's running commentary on the Turing test and some of its subtle but important deficiencies. For example, the book argues that AI's theoretical and empirical environment shows that humans can't detect intelligent inferences in their own thinking, much less in their observations of another person's intelligence. The book neatly explains these internal inferences in the section on the frame problem, which relates change to inference in two important ways. If I'm on a train and the train moves, I also move. However, the train's movement doesn't imply a change in my basic appearance, my bank account, or countless other things.
These frame-problem inferences seem to constitute the human thinker's intellectual reflexes. Are these reflexes what some call the human essence? If so, would solutions to the frame problem give machines the human essence? Thinking about Android Epistemology neatly interleaves the philosophical questions concerning research in artificial and human intelligence in an entertaining and thought-provoking manner. The book's climax draws the reader to the paradox of self-reference, which lies at the heart of much AI research. Efforts to build a thinking machine are bound to involve self-reference and self-reflection. The consequent paradoxes form a tantalizing boundary that naturally engages thinkers and scientists. Do we try to make machines that somehow think like we do? Or do we make machines that think the way machines should think? The authors point out that airplanes don't have feathers and don't flap their wings, and human brains probably don't compute Fourier transforms when they recognize an old friend's face. With observations such as these, Thinking about Android Epistemology has few low points and serves as an excellent overview of the ultimate conjunction of science and philosophy.
Daniel E. Cooke is a professor of computer science at Texas Tech University. Contact him at email@example.com.