March/April 2008 (vol. 23, no. 2), pp. 2-4
Published by the IEEE Computer Society
James Hendler , Rensselaer Polytechnic Institute
ABSTRACT
Is artificial intelligence headed for another AI Winter? Not if we take action now.
Intelligent readers,
Funding in the US and EU for AI research is healthy, and throughout the world we're seeing increased interest in AI at a time when computer science, per se, is seeing decreased enrollments in undergraduate programs. Exciting AI results have been announced in the past year, and for what I believe is the first time, one of Science magazine's top 10 results of 2007 was a computing achievement—AI researcher Jonathan Schaeffer's solving of checkers. Even the business world, which hasn't always been as favorable to AI as it should be, seems to be using the term in a positive way. This is due not just to the consumer success of iRobot's Roomba but also to significant successes related to other consumer products, financial institutions, medical applications, Web search, data mining, spam filtering, and a host of other areas. We're actually riding quite high at the moment!
So why, with all this excitement and success, do I find myself worried? Having lived through the events that led to the AI winter of the mid-'80s, I'm afraid we might be seeing early warning signs of a change in the weather. Let me note, before I launch into this, that I'm much more familiar with the AI scene in the US, and to some degree the EU, than in the rest of the world. I will thus be pointing at what might be only a trend in the US and Europe. However, I fear that this could lead, if you'll pardon the expression, to global AI climate change.
AI winter—the '80s
It takes a long time and a convergence of factors for an AI winter to occur. A facile explanation is sometimes used for the one we experienced in the 1980s. People say that "AI hype" caused that frost. As Wikipedia puts it at the time I write this article, "concerns are sometimes raised that a new AI winter could be triggered by any overly ambitious or unrealistic promise by prominent AI scientists." But many fields have been hyped without seeing the kind of winter we saw in the '80s—can you imagine a car company blaming bad sales on too much automobile hype? So, while overpromising might have been a factor, I don't think it really caused the weather change. Rather, a sort of "perfect storm" converged from three directions.
An often overlooked, but important, contribution to AI winter actually started nearly a decade earlier. DARPA, after a period of significant funding for US AI, changed management, and people who weren't hostile to AI, but also weren't friendly, moved in. The new management didn't kill off all AI funding overnight (although speech recognition took a huge hit by the late '70s). Rather, it felt that AI had had more than its fair share and that other fields, especially the emerging area of supercomputing, deserved a chance. So AI funding to universities decreased not because of a lack of results by AI researchers (in fact, the expert system wave was just starting to crest) but because of a perceived need to "spread the wealth."
Across the pond, the UK's Science Research Council asked James Lighthill, a leading British scientist, to write a review of the state of AI's progress. In 1973, his report came out highly critical of the field and concluded that "in no part of the field have discoveries made so far produced the major impact that was then promised." This led to a cutoff of AI research funding at all but two or three of the top British universities and created a bow-wave effect that led to funding cuts across Europe. (This also led to a DARPA follow-up report that echoed Lighthill's finding, leading to further US cuts in the late '70s.)
Funding cuts, however, are surprisingly blunt instruments when applied to a field, so it takes a while for the effects to be felt. Programs started in the mid-to-late '70s were allowed to run their course, but at reduced levels, and students who had joined the field in the '70s continued their work. In fact, this work went quite well, and by the late '70s and early '80s we saw the first commercialization of AI research in the growing expert systems area. Industrial investment hid the leading edge of research cuts, and the emerging market thrived on techniques developed in the previous decade. It wasn't until the mid-to-late '80s that the lack of new research transitioning to industry made itself really felt—the seed corn had been consumed, and there was a dearth of new crop.
Around this same time, the AI research community made its own near-fatal error. As the expert systems market grew, researchers started feeling the pinch of the research cuts. Mistaking correlation for causation, many assumed that the attention paid to the industrial (expert systems) end of AI was responsible, missing the fact that the cuts had been coming for a long time. With a rising voice, researchers started to blast expert systems as "not really AI" and to point out the many difficult problems that would have to be solved if such systems were to have a lasting impact. The community, on both sides of the Atlantic, thus avowed that the stuff that worked well wasn't really AI. The rest of us chickens denounced the hen that laid the golden eggs, but no other source of gold was offered in its place.
The third major factor was a change in the computer market. Exponential improvement in computing hardware, coupled with a growing interest in small machines, dropped the cost of computing to the point where commodity machines could run Lisp at good speed for far less money. The market created by the AI wave also led to other languages getting their own rule-based engines, and CLIPS (C Language Integrated Production System), a widely available product, started to become heavily used. CLIPS, of course, didn't need a $50,000 special-purpose machine to run on; it could run on pretty much anything that could run C, which meant any of the growing wave of small, and eventually personal, computers. Computers' exponential power curve, coupled with the incredible decreases in memory cost, meant that by roughly 1987, the Lisp machine market was dead.
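To make the contrast concrete, here is a minimal sketch, written in Python purely for illustration (the facts and the single rule are hypothetical), of the forward-chaining pattern that production-rule engines such as CLIPS evaluate; nothing in it requires special-purpose hardware, which is exactly why such engines could migrate to commodity machines:

    # Illustrative forward-chaining sketch (not CLIPS itself).
    # Working memory is a set of facts; a rule fires when all of its
    # conditions are present, adding its conclusion to working memory.
    facts = {("temperature", "high"), ("pressure", "rising")}

    rules = [
        # (conditions, conclusion) -- a hypothetical example rule
        ({("temperature", "high"), ("pressure", "rising")}, ("alarm", "on")),
    ]

    changed = True
    while changed:  # keep firing rules until no new facts can be added
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))  # working memory now includes ("alarm", "on")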
So let's review the storm:

    • cuts in research funding five to 10 years earlier reduced the amount of new AI technology transitioning from the universities,

    • the research community disowned the success of the expert systems technology, and

    • changes in the computing milieu outside the AI venue caught our community off guard.

AI winter—the '10s?
So what's happening today? Let's start with the funding side. In the US, DARPA's Information Processing Technology Office (IPTO) has invested significant amounts in AI. Many US researchers don't realize this because the funding has gone largely to machine learning, robotics, and language technologies, with a higher proportion of the funding going to corporate researchers than in the past. However, the current DARPA administration has clearly preferred to fund AI over most other areas of computer science. At the US National Science Foundation (NSF), funding specifically for computing research has taken a hit, owing to a combination of decreasing budgets and a preference for funding computing as part of larger science efforts, such as bioinformatics, rather than in its own right. So, while AI scientists are still well funded, many of them are working in larger projects that don't focus on AI.
Over in the EU, the Sixth and Seventh Framework Programmes have been good to AI, especially to research in the Semantic Web and related areas. AI researchers in many areas have been involved in this work. Some are merely rebranding their earlier work as "ontology related" and continuing what they did before. Increasingly, however, others are working with the "real webbies" and finding new, exciting uses of AI. Unfortunately, funding for university researchers has all too often come with an expectation of fast transitions to industry. Thus, "picking the low-hanging fruit" has sometimes replaced solving the hard problems as a priority for the funders. The latest round of EU funding has deemphasized the Semantic Web research component, preferring to see this technology cross-breed with the traditional sciences and other fields. This trend is similar to what's happening at the NSF.
With a new administration coming into place in the US, chances are extremely high that DARPA's AI funding will take a big hit. However, this won't happen at once. Current programs will likely run their course, and then AI funding will probably be "normalized" with respect to other parts of the field that rightly feel they have been neglected. Even a new management favorable to our field will likely cut AI budgets as a percentage of the whole, and a management that is neutral or views AI unfavorably would only be worse. The NSF's mainstreaming of AI and the EU's attempts to foster more transition of the Semantic Web community will both likely have the same effect—cuts in AI research as a percentage of the overall computing budget.
Outside academia, AI successes in search and language technology, robotics, and the new "Web 3.0" applications are starting to transition to industry in an exciting way. Unfortunately, this application of AI to industry is growing just as we see the first signs of the coming research cuts. Already, it's not too hard to find some people in the AI research community who are happy to disown the Semantic Web, disparage companies such as Powerset that are trying to apply language technologies to search, or say that modern robotics, while impressive, isn't really part of AI.
As for the computing milieu, we're definitely in a time of change. The field of computing is trying to come to grips with new ideas such as "cloud computing," social-networking sites are replacing search engines and news sites as people's favored homepages, and a new generation of applications is centering on large groups of people collaborating over an increasingly distributed network. I won't try to predict the future of these waves (at least until a later editorial), but one thing is for sure: many of the entrenched models of AI will have to be rethought as computers change from application devices to social machines.
And now you have it—the reason some uncomfortable prickles are running up and down my spine. We see

    • a potentially large cut in funding that will really start to be felt a few years hence,

    • an AI research community that's still ready to kill the golden geese, and

    • a potential phase transition in the nature of computing that threatens to disrupt the entire computing field, including AI.

We're not there yet, but the storm clouds might well be on the horizon if you know where to look!
Weatherproofing against climate change
If I'm right, what can we do about it? I think if we act now, while we're still in good times, we can prevent, or at least mitigate, the coming storms.
First, we must find ways to pull together as a field and to document the many successes occurring today. Instead of disowning applied AI, let's embrace it and ensure we acknowledge the successes we see. Tell people how great it is that robots can autonomously navigate through urban terrain. Play with the new search products and identify their strengths, not just their faults. Get excited that little bits of AI can be found at the bottom of a growing number of Web applications. It's okay to make it clear there's still a lot of research to be done, but do it by affirming current AI as the base on which you build, not as a competitor with which to contend.
Second, it's incumbent on us to document the past decade's successes. Several large DARPA projects have, for example, made amazing strides in combining diverse AI approaches into integrated systems. However, publishing papers on such systems is difficult, so many people haven't been exposed to the techniques developed in these projects. The transitions from US and EU research funding to the emerging Semantic Web (Web 3.0) industry must be documented while they're still occurring. Machine learning and many associated areas have seen tremendous growth under the past few years of funding. Let's explain, in reports that people who aren't AI experts can read, the advances this has enabled and why continuing the research is important.
Conclusion
The AI field, as I've said often, is in great shape—and if you don't believe me, just wait till you see the incredible young researchers we feature next issue as "AI's 10 to Watch"! However, it's while we're at a high point that we need to take action to avoid a fall. Even if I'm being wildly pessimistic, documenting the transitions and successes occurring today certainly won't hurt. Pulling together, documenting our successes, and acknowledging the excitement going on today is the recipe for success.
Yours as always,



