For the past couple of weeks, my head has been full of "intelligence." Not only do we have this special issue on human-level intelligence, but Forbes has also approached me on the subject of AI, and I've been organizing the first Future Challenge: Intelligent Vehicles and Beyond competition, all while chairing the 2009 IEEE Intelligent Vehicles Symposium in Xi'an, China's ancient capital.
Such an intensive "onslaught" of intelligence has forced me to ruminate on these topics: what is human intelligence, what is artificial intelligence, and what is the future of intelligence after all? The more I think about these questions, the fewer answers I have. I'd like to enlist our community for a collective response. Hopefully, this will also help in our selection for the AI's 10 to Watch list next year.
For many people, AI's ultimate goal is to create computers or machines so advanced that they possess intelligence equal to humankind's. To make this grand vision verifiable, Alan Turing devised his famous Turing test six years before the official birth of AI as a way to confirm a machine's ability to demonstrate intelligence.
Of course, the stream of strong doubts and objections to such a view of AI has never ceased, and the roller coaster of constant upheaval in AI research offers little remedy. Most AI researchers are more interested in developing concrete intelligent functions for specific computing tasks and pay little attention to the Turing test, which has more to do with AI's philosophy than with its practice.
On the other end of the spectrum, ardent believers are much more optimistic and aggressive about passing the Turing test. In 1990, futurist Raymond Kurzweil predicted that Turing test-capable computers would be available around 2020, and then revised his estimate to the year 2029 in his $20,000 Long Bet Project with Mitch Kapor. I'm hopeful for Kurzweil, especially with the unavoidable human tendency toward anthropomorphic fallacy that Michael Shermer points out—a true Turing test might not be that hard to pass after all. More recently, Kurzweil asserted that "the singularity is near" in his book of the same name: around 2045, we'll enter an era in which "our intelligence will become increasingly non-biological and trillions of times more powerful than it is today—the dawning of a new civilization that will enable us to transcend our biological limitations and amplify our creativity." His prediction implies that AI will soon advance beyond the Turing test and surpass human intelligence as we know it.
I have deep doubts about such claims because the inclusion of such specific dates makes a meaningful philosophical discussion difficult, not to mention the lack of technical guidance regarding AI's further development. Rather than focus on either extreme, I'd like to take a new path outside the current debate: it might be time to separate AI from human intelligence for a while, and consider it as an independent form of intelligence—that is, a type of real intelligence within cyberspace.
As we find Web 2.0 and all such forms of X 2.0 mushrooming everywhere, cyberspace—the virtual world—might soon become as real to human beings as physical space. As I mentioned in my previous column, using the mathematical concept of complex numbers as an analogy (a complex number consists of a real part and an imaginary part), our future world could become a "complex space," composed of half physical space and half cyberspace—that is,
Complex Space = Physical Space (50%) + Cyberspace (50%).
It's taken more than 200 years for us to realize that imaginary numbers aren't imaginary after all; they're as real as real numbers. This time around, I hope it will take far less time to accept that cyberspace is as equally real and important as our natural world.
To live effectively in complex spaces, we'll need a new form of intelligence, or augmented intelligence: complex intelligence. My postulate is simply that
Complex Intelligence = Human Intelligence (50%) + Artificial Intelligence (50%).
We can consider this type of AI as an extension of current AI. According to Karl Popper's theory of reality, the universe includes three interacting worlds: World 1, the physical world; World 2, the mental world; and World 3, the artificial world, the home to abstract objects such as theories, stories, myths, tools, social institutions, and works of art. Cyberspace is a materialization or reflection of World 3. Traditional human intelligence is a connection between Worlds 1 and 2, whereas AI will be its counterpart connection between Worlds 2 and 3.
This isn't just my instinctive reaction to Kurzweil's theory of singularity or Irving John Good's prediction of an "intelligence explosion"; it's also my hurried answer to questions such as "Is society computable?" and "Can we model a culture?"
I borrowed the heading above from the title of a book, The Sciences of the Artificial, by the late Nobel laureate Herbert A. Simon, one of AI's founders. Clearly, the rapid, dynamic development of cyberspace, particularly its speed, scale, and sheer volume of information, has imposed urgent and tremendous demands for such a science and the corresponding AI technology. If we consider the artificial in AI real and free of certain conventional laws of the physical world, we can open the door to such a new science and strengthen AI's hand in constructing intelligent systems for better, more effective cyberphysical interactions in complex spaces.
For example, we can incorporate various "artificial laws," such as Merton's, Moore's, and Metcalfe's, into AI and intelligent systems to investigate and affect the dynamics of cyberphysical interaction. We could go even further and build "parallel universes" in the sense of third- or fourth-level parallelism, along the lines of the many-worlds interpretation of quantum mechanics. By extension, societies and cultures are then, of course, computable. This will further widen the roads and open the doors for existing and new fields of AI research, such as behavioral computing and psychological computing.
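As a toy illustration (not from the column itself), two of the "artificial laws" mentioned above can be encoded as simple quantitative rules that a simulation of cyberphysical dynamics might build on; the function names and the two-year doubling period are assumptions for this sketch:

```python
def moores_law_transistors(base_count: float, years: float,
                           doubling_period: float = 2.0) -> float:
    """Moore's law: transistor counts roughly double every ~2 years."""
    return base_count * 2 ** (years / doubling_period)


def metcalfes_law_value(n_users: int) -> float:
    """Metcalfe's law: a network's value grows roughly as the
    square of the number of its connected users."""
    return float(n_users ** 2)


# Doubling a network's users quadruples its Metcalfe value.
print(metcalfes_law_value(200) / metcalfes_law_value(100))  # 4.0
# After 10 years, Moore's law predicts about 32x the transistors.
print(moores_law_transistors(1.0, 10.0))  # 32.0
```

Such rules govern artifacts rather than nature, which is precisely why an AI free of certain physical-world laws could treat them as first-class dynamics.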
We can consider British philosopher and mathematician Bertrand Russell a strong supporter of such computable cultures. He stated, "A great many things we thought were natural laws are really human conventions," and, "The whole idea that natural laws imply a lawgiver is due to a confusion between natural and human laws." As he further stated, his arguments have made "this whole business of natural law much less impressive than it formerly was." I hope his arguments can also help justify AI's liberation from its association with human intelligence, at least in cyberspace.
I don't yet have a clear picture of what a complex intelligence could mean, but I do have a strong sense that we must respect and think of AI as an independent form of machine intelligence. Some of the original ideas from Marvin Minsky's Society of Mind (Simon and Schuster, 1988) should prove useful in the future direction of AI. But in the end, we should take an evolutionary approach and enjoy the process by which new intelligence for cyberspace, cyberphysical space, or complex space will emerge. Hopefully, in the near future, reverse AI tests such as CAPTCHAs (Completely Automated Public Turing Tests to Tell Computers and Humans Apart), although still primitive, will prove useful for differentiating new AI from human intelligence.
Do we need a new definition or a new name for such intelligence? Intelligence 2.0? Web or cyber intelligence? Computational intelligence? I'm comfortable with the old name, artificial intelligence. But AI is no longer truly artificial; it is real. Our future intelligence will be half human and half artificial; as I've said before, no more, no less, just half each.
Finally, I wish the best to Patrick Hayes (Figure 1) and express my sincere appreciation for his eight years of great service as a coeditor of the Human-Centered Computing department in IS. Thank you, Pat; your contributions helped further the discussion, understanding, and research of our future intelligence.
Figure 1. Recognizing Patrick Hayes. Coeditors Ken Ford (left) and Robert Hoffman (right) present a certificate of appreciation on behalf of IEEE Intelligent Systems to Patrick Hayes for eight years of service as a volunteer coeditor of the Human-Centered Computing department.
I just wanted to point out a big omission in the story about the DC snipers and the white van. If Coplink's search found it, that does not reflect well on association rule mining. The story is that the white van was a spurious association that misled the police for a long time; the snipers were actually driving an old dark sedan. Because white vans are so common, it was easy to find such an association.
Thank you for such a good example of a flaw of association rule mining.
Hsinchun Chen responds:
We appreciate the letter sent by Myriam Abraham and apologize for the misquote in the IEEE Intelligent Systems In the News article "Data Mining for Crooks." The article stated, "The van was spotted in many gas stations during the DC sniper investigation. Association rule mining allowed the specific van to rise to the top of crime scene associations." But it was a suspicious dark blue Chevrolet Caprice that was identified via Coplink's association rule mining. The Caprice was identified after combining and analyzing police databases from several jurisdictions that were linked through Coplink during the sniper investigation.