Issue No. 2, March/April 2005 (vol. 20), ISSN 1541-1672, pp. 2-3
James Hendler, University of Maryland
Intelligent Readers,
In my past, both as an AI researcher and a Darpa employee, I've spent a lot of time dealing with both sides of something that consumes almost all of us—research funding! In this letter, I want to reflect on funding trends and how they might affect us in the intelligent systems community. (For actual advice on getting your research funded, consider my tutorial slides.)
A talk I gave at AAAI 2000 called "Missed-perceptions: AI vs. the Funding Community" started with this multiple-choice quiz:
Which of the following is a true statement?
A. AI, as a discipline, is not well regarded by the funding community.
B. AI researchers get a disproportionately high percentage of CS-related funding.
i. Only A
ii. Only B
iii. A and B
iv. Neither A nor B
Did you guess iii? If so, you understand the essential contradiction in which many of us live as AI researchers. As a field, we often do quite well in the funding hunt, but we rarely get to work on the stuff we really want, and, as AI researchers, we get little respect for what we do.
I think this is partly because as researchers we tend to live down in the weeds of our research, seeing AI as many tenuously connected subfields competing for funding and status. However, to the next level of the food chain, we're all "AI researchers" at best. Worse, when we move up a level, to where the dollars really live in the funding agencies, we're just a part of "information technology," competing with many other CS and non-CS fields for attention. Big trends, such as bioinformatics or so-called cyberinfrastructure, come to the view of the governments on which we rely. But to a prime minister or congressman who's deciding whether to cut the budget at your country's science-funding agency, the question isn't what you work on, but "What have you done for me lately?"
How has funding changed?
Nowadays—and it doesn't matter where you live—the day of the individual-researcher, million-dollars-a-year grant is nearing an end. Increasingly, funding agencies are looking to fund teams of researchers working on bigger problems. Funding is moving in this direction because a combination of long-term trends and short-term goals is pushing the funders toward a more risk-averse and more applied funding portfolio.
Why does this lead to the funding of teams over individuals? When a funding program backs a team, especially one consisting of top scientists, it's hedging its bets. Even if a couple of these folks don't get it together, someone will likely do something that can be bragged about at a higher level (increasingly the metric by which funding agents are judged). Add to this that the funding program gives the whole team $1M or so, instead of providing that much to each individual, and this approach's lure becomes obvious.
Interdisciplinary work, however, can sometimes move us out of this firing line. When someone asks you what you do, the answer "I've been involved in developing methods for sequencing the human genome" sure sounds better than "I work on constrained search over parameterized strings." Work that has an impact on health, commerce (including e-commerce), big science, transportation systems, or manufacturing, to name just a few areas, is on the radar scopes of the people who control the big budgets. That's the good news for those of us who see intelligent systems as crossing boundaries and who push our students to learn more than just "textbook AI."
This is a mixed blessing, however, because it's exactly those things that are on their radar scopes. When we help a scientist solve a major problem, funding agents rave about bioinformatics. When the new intelligent transport network eventually comes along, they'll be sure to say, "Wow, those civil engineers are amazing!" Work on the next generation of the Web, and it's, "Well heck, you must be working for Google."
Where's the respect for the AI discipline's beating the world chess champion that matches the respect for the biologists who sequenced the human genome?
Our discipline at risk?
One impact of this funding change worries me: the short-term effects of this new funding model on the next generation, our students. There are times and places where the science, and particularly the mathematical discipline, underlying IS is crucial to advancement.
Most papers we reject at this journal have an existence proof, via an implementation, but no explanation of why it works or where it breaks. This isn't to say all submissions should be full of Greek symbols or complexity results. However, I can't tell you how nice it is when I work with scientists and they say, "Oh, that works because of the McGurk effect," or "You'll never be able to prove that because of the Heisenberg uncertainty principle."
The fact is, much of our IS work is done without recourse to a truly powerful underlying theory, and current complexity theory isn't going to cut it in explaining agents, heuristics, robotic interactions, or a lot of the other things we worry about. New science is needed to take us into the future, and our funders don't seem to be funding it! Getting our students well versed in our field's esoterica, a key purpose of academia, is hurt greatly when their thesis is funded by someone who wants them to "make the database work better." Too often AI researchers are funded to pick the "low-hanging fruit," which seems to me exactly what we should be teaching our students not to do.
Pushing our students into being functionaries on ever-larger teams so that they can receive their research stipends also puts their futures at risk. Those who go on to be academics or to work in research laboratories are evaluated by the rules of fields that don't understand the funding pressures we computer scientists work under. Furthermore, while most other computing subareas are coming to realize the value of publishing outside "write-only" archival journals, it's a slow change. It's hard to defend tenure for a junior faculty member who has brought in millions of dollars by being on a number of interdisciplinary teams but hasn't published much in the CS literature. Worse, when these candidates come up against mathematicians and physicists at the higher levels of promotion, how do you defend a candidate who is billed as "a good team player" against someone who has written a widely cited paper on, say, understanding the Permian-Triassic extinction?
What can we do about this?
We can't change the funding picture; that's a long-term trend (besides, many of us like doing interdisciplinary work!). However, we can work on changing the perception and practices in our field. The splintering of the AI discipline hurts us no end. If we can mend some of our own fences and start generating combined approaches to major AI problems—or at least cite each other's work, for Pete's sake—the different methodologies we use in the different IS areas will stop being a local barrier to funding, publishing, and promotion.
Bottom line: we have too much zero-sum thinking in what isn't a zero-sum game. Work to recognize the best students in our field, even if they're not in your subdiscipline. Recommend accepting a paper you find exciting, even though the methodology isn't what you would prefer. And definitely stop unduly badmouthing each other on funding review panels. As we realize that being a better team player is crucial to being a success in our field, we need to start being better team players ourselves.
The talk I gave at AAAI 2000 quoted Ben Franklin, who said to his colleagues, "We must indeed all hang together, or, most assuredly, we shall all hang separately." True then, and true of our field today. So in a nutshell, my advice is, Say It Loud: AI and Proud!
