Issue No. 4 (vol. 20), July/August 2005, pp. 2-3
Published by the IEEE Computer Society
James Hendler, University of Maryland
ABSTRACT
An ever-tightening definition of "mainstream" AI and an inbreeding of the field are causing great harm at a time when AI should be booming.




Intelligent Readers,
What's wrong with us?
In a field such as ours, with so much promise, facing what many consider the most significant scientific challenge (the quest to explain intelligence), how can we be so stupid about the narrow-mindedness we so often show in defining that field? Don't we see the damage we do by taking an all-too-narrow view of what AI is and how we should practice it? Yet when I look at our major conferences and societies, I see an ever-tightening definition of "mainstream" AI and an inbreeding of the field that are causing great harm at a time when AI should be booming.
Why am I so upset? It's partly a response to several recent incidents and partly self-anger—I've been part of the problem and now realize my mistake. Let me explain.
Of committees and fellows
The first realization came during a recent email discussion about the composition of the AAAI conference's program committee. The organization received an email pointing out too much overlap between our senior program committee and that of one of the AI subarea conferences. As the chair of the AAAI Conference Committee, I was asked to respond, and I did. I pointed out that there was indeed overlap with that other program committee but that because many of these senior AI people served on committees for a number of other AI conferences, there was overlap with them as well.
Mea culpa! The more time I've had to think about it, the more I realize that my words shouldn't have been offered as an explanation but as an abject apology! Consider: when we look at the program committees of IJCAI, ECAI, AAAI, and other major AI conferences, should we be looking solely at overlap with conferences in areas such as machine learning, planning, knowledge representation, and probabilistic logic? What about conferences in artificial life, cognitive science, neural networks, and other such areas? Comparing the main AI conferences with these related ones, we see a different story: almost no overlap at all.
If this narrowness involved only the conference committees, it would be a problem, but this is just an indicator of a larger issue. For example, if we look at the invited speakers at the major conferences, we don't see a lot of evidence for a wide definition of AI. With the laudable exception of IJCAI's choice of Geoff Hinton for the Award for Research Excellence, the bulk of speakers at the major conferences for the past few years have been from the same mainstream AI communities as the program committees. Rather than working hard to expand our conferences' coverage of the field and inviting speakers who stretch our definitions, we appear to feel comfortable reviewing and re-reviewing a relatively coherent mainstream view of the field.
Or consider this. Now is the time of year when some AI societies announce their new fellows. This year, the AAAI and ECCAI fellows lists comprised excellent people well deserving of the award (including ECCAI's choice of our former editor in chief Nigel Shadbolt. Congrats, Nigel!). However, looking through the past few years' appointments, the same trend seems obvious: with rare exceptions, the recipients were from the mainstream. None seem to be from the "embodied AI" (sometimes called "new AI") community, and precious few are from AI's soft-computing side.
After looking at the awardees, I emailed the chair of one of the fellows committees, pointing this out and suggesting we were getting too narrow. He admitted that the range of recipients could be broader, but he said, "Committee members find it tough to turn down top-quality mainstream people to make room for top-quality non-mainstream people—which of these people would you have turned down to make room for [someone else], for example?" And that's the problem in a nutshell. If we view this as a zero-sum game, then the rational choice would seem to be to bolster our core.
A dangerous status quo
I've written before about the danger of this zero-sum thinking with regard to funding ("Fathoming Funding," Mar./Apr. 2005), so I won't repeat that argument here. However, in this context, that same thinking leads to an even more fundamental problem. While I can't speak to the AI hiring situation outside the US, this year I saw several people looking for US academic positions in AI who had amazing qualifications but, in the end, few interviews and fewer offers. Knowing how incredible some of these candidates were, I talked to many people who were involved in AI hiring decisions at various institutions. Over and over I heard the same things: "We were looking for someone in machine learning," "They didn't quite fit in with our view of the field," and "We needed someone more mainstream." (Interestingly enough, these people got offers from places such as Google, Microsoft Labs, Yahoo, and Amazon.com, showing that industry is willing to hire the best and brightest graduates, especially when they don't fit the narrow definitions often used in academe.)
And that's what scares me. The fruit of all this—the conference committees, speakers, awardees, and the like—is the visibility and attention paid to those in the mainstream, which is becoming a self-reinforcing definition of what succeeds in AI. This makes it easier for those in AI departments to argue for hiring in these more narrowly defined fields and to promote those who don't stray far from the mainstream. It also lets us continue to offer the same AI courses we've been teaching for the past 20 or more years (a topic for a future editorial). Our students in turn learn the same view of AI that we did and, as they advance through their careers and become the conference chairs and fellows selectors, choose those most like themselves for honors. There's a term for this in genetics—selective inbreeding—and while it's often used to "purify the strain" of domesticated animals, in human populations it has always had the most disastrous results. We shouldn't repeat those mistakes with our field!
Putting our content where our mouth is
Eldridge Cleaver is often quoted as saying, "you're either part of the solution or you're part of the problem." To that end, let me tell you what IEEE Intelligent Systems will do about the problem I've been discussing. First, we're currently filling the editorial calendar, choosing special issues for the next couple of years, and we're endeavoring to widen the topics presented. We're soliciting special issues on topics relating AI to fields such as artificial life and evolutionary computing and are exploring how we can widen our coverage of AI in the arts and entertainment and other such areas. We're also working on broadening our editorial board to include people from less represented areas of the field, and asking new members to not only review articles but also solicit high-quality material in areas related to the quest for intelligent systems that we might be underrepresenting in our content.
Conclusion
In short, I'm committing this magazine to trying to help bring together the splintering field of AI. I know it's not much, but it's a start, and if we all take what small steps we can, the field will be much better off in the future.




PS: Sometimes I wonder if anyone out there actually reads these editorials. Please feel free to let me know whether you're reading them, what topics you'd like covered, or whether you have issues with what I've written, and to offer advice about how to fill this space.