Vol. 21, No. 1, January/February 2006, pp. 2-3
Published by the IEEE Computer Society
James Hendler , University of Maryland, College Park
ABSTRACT
It's time to look at human intelligence through the lens of what we've learned in the 50+ years of AI.




Intelligent Readers,
While listening to the radio the other day, I heard a commentator discussing some political issue. As part of his argument, he mentioned that just a short time before the Wright Brothers' precedent-setting flight at Kitty Hawk, the US Army had been cutting funding to flight research because of compelling arguments that human flight was clearly impossible. He didn't mention by name the person who had raised these arguments but did point out that in terms of traditional scientific merit, this person was much more respected than the two bicycle repairmen from Dayton, Ohio.
Boy, is that an understatement! The critic, Simon Newcomb, was one of the most famous scientists of his day. Among his many honors, he was the first president of the American Astronomical Society, the president of the American Association for the Advancement of Science, a member of the Royal Society, and the recipient of a gold medal from the Royal Astronomical Society. In short, when Newcomb said in 1901, "Flight by machines heavier than air is unpractical and insignificant, if not utterly impossible," people listened.
AI folks might recognize Newcomb's name. For several years, Pat Hayes and Ken Ford awarded the Simon Newcomb Prize to the author of "the silliest published argument against AI." In fact, Hayes gave a good invited talk about Newcomb, explaining where he had gone wrong. In short, Newcomb's analysis was theoretically strong, but the Wright brothers, experimenting in a wind tunnel, discovered that various properties of airfoils provided more lift than Newcomb and others had thought possible. The Wrights' empirical work trounced the "received wisdom" of Newcomb's theoretical work.
I perked up when I heard mention of Newcomb and thought about Hayes and Ford's prize. There are a couple of reasons for this. One is that recently I've been wishing that the award were still around so that I could nominate Clay Shirky for his ridiculous arguments about the Semantic Web (but that's a topic for another editorial, another day). The other is that I've been thinking about AI and flight in the context of an article I'm writing for one of the various activities surrounding this year's 50th anniversary of the Dartmouth Conference.
Rethinking AI and human intelligence
In my Nov./Dec. 2005 editorial, I proposed that one problem in keeping up the funding level for AI was that "we've forgotten to take the time to explain to people the amazingly hard, and scientifically fascinating, challenges that remain ahead for our field." In thinking about extending that idea and taking up my own challenge, I realized that I was rethinking thoughts I had early in my AI career. However, as both the field and I have aged, my thinking about the relationship between AI and human intelligence has shifted.
Once upon a time I argued that the cognitive approach to AI was the crucial link to understanding how humans think, and I continue to believe that understanding is the critical scientific target AI must reach. Our goal as a field isn't simply to improve computers' performance on seemingly hard problems, but to help us understand the nature of human intelligence. However, I've changed from thinking that the right way to approach this problem is solely to explore how humans do it, to wondering whether there might be principled definitions of the aspects of intelligence that we can use to guide our field.
I think I'm finally starting to understand something that Herb Simon taught. He argued that instead of looking for definitions of intelligence at a purely theoretical level, we would be much better off creating operational definitions that would let us place metrics against terms such as "creativity," "cognizance," and "comprehension," which we consider the hallmarks of an intelligent creature. By doing this, he argued, we could see that computers were already able to show significant capabilities on many of these dimensions, and we could use these definitions to measure any progress we continue to make.
Birds, planes, brains, and AI
As more and more of the literature has reflected on AI at this 50th anniversary, more and more of my peers have argued that AI has outgrown the pursuit of understanding intelligence and become an engineering field focused on computing applications. "Look," I've heard people say, "we didn't learn to build airplanes by studying how birds fly. Why should AI be held back by worrying about what people do?"
But that argument has a fatal flaw. While airplanes have gotten better and better in the century since the Wrights' first flight, we still struggle to understand the details of bird wings, and there's not an aerospace engineer alive who wouldn't give anything to build a device within an order of magnitude of a bat's maneuverability. However, our understanding of how birds and bats fly has been hugely advanced by the aerodynamic laws learned in the continued design of flying machines. Not only have we surpassed animals on some of the operational definitions of flight (particularly speed and distance), but we've also learned about nature's phenomenal capabilities by studying what we can't yet build.
The study of aerospace engineering and aerodynamics doesn't inherently conflict with an understanding of flight; on the contrary, the engineering greatly enhances the understanding! Yet in AI, we seem to have lost the connection between our engineering triumphs and the natural phenomena we set out to explore. We must still explore different approaches to AI and learn how to use it to help solve many of the challenges facing our world. However, it's also time to look back at human intelligence through the lens of what we've learned in the 50+ years of AI.
The aerospace-engineering community can brag about the speed of its planes yet isn't embarrassed to admit that we still can't outmaneuver a bird or swoop like a bat. Why, then, is AI often criticized for claiming to outperform humans in some areas of thought, or harassed (as by those who have won the Simon Newcomb award) for not being able to do so in others?
Conclusion
As an AI scientist, I stand in awe of the human mind's creativity and cognizance. At the same time, I'm proud of the superhuman capabilities we've created to store and search vast arrays of data in ways no human could ever hope to without the aid of our machines. However, I question whether we've done due diligence in seeing whether the principles of the latter can inform our understanding of the former. And this, I'm sad to say, seems to me where modern AI is lacking. An embarrassingly small percentage of our field's effort seems to go into understanding what we've learned about the very scientific question we set out to answer.
It's time for the AI community to step back up to the plate of science and see if we can hypothesize about how we can use what we can do to understand more about what we can't. And if, as some critics charge, we still won't get it right, we can reply,
It is a fundamental principle of pure science that the liberty of making hypotheses is unlimited. It is not necessary that we shall prove the hypothesis to be a reality before we are allowed to make it.
Inspiring words from a brilliant scientist, Simon Newcomb (Nature, 1894).



