As you no doubt know by now, 2006 is the 50th anniversary of the Dartmouth summer workshop that was, if not the birth of modern AI, then certainly the party celebrating that birth. Of course, machine intelligence workshops had already taken place in the US and UK, and Alan Turing had proposed his famous "imitation game," now called the Turing Test, in a 1950 paper. However, the 1956 summer school brought together the field's leading researchers, along with a small number of bright students interested in learning more about this newly emerging "artificial intelligence" thing.
As editor in chief, I was initially tempted to create a volume, as several other AI magazines and journals have, that would look back at 50 years of AI and ruminate on where we've been. However, the more I thought about this issue and the stories I'd heard of the field's early days, the more I started thinking about how exciting it must have been before anyone had talked of "AI winter" or, as one AAAI Spring Symposium was so foolishly titled, "What Went Wrong and Why?" Rather, the field focused on an exciting journey into a bright, unknown future. Working with primitive computers now surpassed by a microwave oven's microprocessor, these daring scientists dreamed of solving one of the most enduring scientific problems: What is intelligence, and what might it mean that we have it?
But wait—how exciting it should be now! After 50 years of exploring the field of AI with ever-more-powerful computers, we've learned so much more than we knew then. We've learned of problems whose complexity boggles the mind, where "intractable" is better than the oh-so-dreaded, and way-too-often-occurring, undecidability result. Cognition turns out to be harder than we ever dreamed. Surpassing human capabilities at even simple games such as chess isn't as simple as it first seemed, and we're still not even close to mastering hard games, whether exponential nightmares such as Go or unsolvable puzzles such as interactive strategy games. Despite the computer's incredible power and the awesome information space that's the Web, search engine technology remains primitive, and real understanding of human language seems as far away as ever. In short, we remain poised on the edge of an exciting journey into a bright and unknown future!
Given the unsolved problems and our tendency to forget how exciting the quest to build the intelligent machine is, IEEE Intelligent Systems has chosen to devote this special issue to the Future of AI. May the next 50 years bring as many successes and surprises.
To explore our field's future, I invited a number of well-known AI scientists to contribute articles speculating about where AI is headed and how we might get there. The response was wonderful, as I think you will agree.
To start with, I solicited a set of longer articles representing a wide variety of AI research.
Dartmouth workshop attendee Oliver Selfridge, one of the first machine learning researchers, has worked in both academia and industry and has been an advisor to many US government agencies. When I first asked Oliver to write an article for a special issue on the Dartmouth 50th, he was hesitant—until he learned that I was asking him to write about the future, not the past. Oliver has always been a visionary, and he shares with us his continuing vision of machine learning's future—summarizing traps into which researchers often fall and describing challenges that remain.
Edwina Rissland, a University of Massachusetts professor and one of the founders of case-based reasoning and the area of AI and the law, also rises to the challenge of exploring an area of AI with a long history. In her article, she argues that research in similarity-driven reasoning—for example, reasoning by analogy—has accomplished much in the past 50 years but still has a long way to go. She explores past approaches and outlines many unsolved problems in that domain.
Raj Reddy is a former dean of Carnegie Mellon University's School of Computer Science and a recipient of the 1994 Turing Award for his seminal work in large-scale applied AI. His article explores how intelligent systems research could have a major, beneficial impact on society as computer power continues to increase and as AI and robotics stride forward. Raj explores the many ways in which people are deploying robotic research, noting that "Such capabilities can be used to further increase the gap between the haves and have-nots, or to help the poor, the sick, and the illiterate." He challenges us to take the right path as AI moves into the future.
This theme also appears in the article by Austin Tate, who holds a chair in knowledge-based systems at the University of Edinburgh and is a fellow of the Royal Society of Edinburgh, Scotland's National Academy. (Austin is also a member of the IEEE Intelligent Systems Advisory Board and has served as an associate editor in chief for a number of years.) He argues that we can use AI technologies other than robotics, especially intelligent cooperating agents, to create a "helpful environment." To illustrate this, he gives examples from several projects, including disaster relief and emergency response and rescue.
Representing a different view of creating agents in the future is an article by Luc Steels, cofounder and former chairman of the Vrije Universiteit Brussel's Computer Science Department and one of Europe's leading proponents of what some call "nouveau AI." Luc explores semiotic dynamics, in which groups of agents, human or machine, collectively invent and negotiate shared symbol systems that they then can use for detailed communication. He argues that human language is best understood as a complex, dynamic system shaped by the evolution of communication. To illustrate this concept, he provides examples from robotics, agents, and learning research. Luc believes that this new approach to exploring language challenges the traditional view of language as the competence of an idealized speaker. He posits that this approach might completely change how we view and build human-to-computer (and human-to-human) communication systems.
Jordan Pollack, a Brandeis University computer science professor, takes such an approach even further. Jordan was an author of a 2000 Nature paper describing robots with a locomotion system that evolved from simple electromechanical systems, rather than being designed by humans. That paper generated huge interest and spawned a great deal of artificial-life research. In his article in this issue, Jordan explores the principles that enable such evolutionary results, arguing that traditional AI might be based on a misapprehension about what it is to be intelligent. In an analogy sure to be painful to many in the AI community, he wonders whether traditional symbolic AI is arguing that intelligence is too complex to have evolved without some sort of "intelligent designer" involved in the loop.
In addition to these six articles, this issue features several shorter articles by members of our advisory and editorial boards and by leading researchers in various AI subareas. I can't describe them all here, because at the time I'm writing this we've received too much material to fit in a single issue and are working out which articles will appear in this issue and which will appear in future issues.
Several of our regular departments in this issue also offer perspectives on the future. For example, the Semantic Web department features an article by the University of Southampton's Nigel Shadbolt (a former editor in chief of this magazine); MIT's Tim Berners-Lee, who invented the World Wide Web; and Wendy Hall, the head of Southampton's School of Electronics and Computer Science and one of the best-known computer scientists in Britain. They remind us of the notion of the Semantic Web as the "web of data" and explore what that could mean and how we can achieve it, providing a guiding vision for this important new AI research area.
In previous issues of this magazine, I invited you to submit your own articles for this issue. A few readers took us up on this. One of these papers appears in this issue: "AI and Science's Lost Realm," by Colin Hale, a University of Melbourne graduate student who has returned to get his PhD after almost 20 years in industry. Self-described on his Web site as "a mature age student following the trail towards fun," Colin challenges some traditional views of AI and science, even exploring whether "metaphysics" might be a better way of approaching some of the field's problems.
Somewhere in this issue is sure to be an article you disagree with. We can't wait to receive your letters or short articles expressing your opinions on the field's future. We'll reserve space in forthcoming issues for these pieces, and I hope you will join in the fun by writing provocative pieces of your own.
It would be hard to find a triter phrase than "our children are our future," but in academia, as in our "real" lives, it remains true. The true innovations in the next 50 years will need to come from those who are starting their academic careers today. These are the researchers who will guide the field through the many changes sure to come as computers continue evolving, as we continue to explore human intelligence, and as we still strive to answer that primal question, "What is intelligence?"
To honor AI's future leaders, we're excited to include the AI's 10 to Watch department, which identifies some of the most promising young researchers. A competitive process, with nominations from leading AI researchers around the world, led to these researchers' selection. While we give them only one page each today, some of them might well end up writing the longer featured articles in our issue celebrating AI's 100th anniversary. I wish we could have included everyone who was nominated, but I hope you will agree that the recipients represent some of the best that AI has to offer.
Unfortunately, I'm forced to end on a sad note. Push Singh, a recipient of the AI's 10 to Watch award, passed away in February, not long after we notified him of the award but before he had a chance to claim his prize. So, we've replaced his page in the AI's 10 to Watch department with an In Memoriam article describing his work and what a special person he was. I hereby dedicate this special issue to his memory, a piece of the Future of AI that will be forever missing.
Send letters, including a reference to the article in question, to email@example.com. Letters will be edited for clarity and length.
If you're interested in submitting an article for publication, see our author guidelines at www.computer.org/intelligent/author.htm.
We're pleased to announce that Peter Norvig has joined our advisory board and Sean Luke has joined our editorial board. We also bid farewell to editorial board member Mark Swinson and wish him well in his endeavors.
Peter Norvig is Google's Director of Machine Learning, Search Quality, and Research. Previously, he was the senior computer scientist at NASA and head of the Ames Research Center's Computational Sciences Division. Before that he was the chief scientist at Junglee, the chief designer at Harlequin, and a senior scientist at Sun Microsystems Laboratories. He has also been a professor at the University of Southern California and a research faculty member at the University of California at Berkeley. He received his BS in applied mathematics from Brown University and his PhD in computer science from UC Berkeley. He's a fellow of the AAAI and coauthor of Artificial Intelligence: A Modern Approach. His publications center on AI, natural language processing, and software engineering, including Paradigms of AI Programming: Case Studies in Common Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX. Contact him at Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043; firstname.lastname@example.org; www.norvig.com.
Sean Luke is an associate professor in George Mason University's Department of Computer Science and is codirector of the university's Evolutionary Computation Laboratory. His research interests include evolutionary computation and stochastic search, machine learning of neural networks and finite-state automata, coevolution, multiagent simulation, and swarm robotics. He received his BS in computer science from Brigham Young University and his PhD in computer science (AI) from the University of Maryland at College Park. Sean serves on the boards of Evolutionary Computation and Genetic Programming and Evolvable Machines and is the author of the ECJ evolutionary computation system and the MASON multiagent-simulation toolkit. Contact him at the Dept. of Computer Science, George Mason Univ., MS# 4A5, 4400 University Dr., Fairfax, VA 22030; email@example.com; http://cs.gmu.edu/~sean.