Issue No.03 - May/June (2003 vol.18)
pp: 18-24
Published by the IEEE Computer Society
Tim Menzies , West Virginia University




"Take pride in how far you have come; have faith in how far you can go." —Anonymous
In the 21st century, AI has many reasons to be proud, but it wasn't always this way. New technologies such as AI typically follow the hype curve (see Figure 1). 1 By the mid-1980s, early successes with expert systems 2-5 caused skyrocketing attendance at AI conferences (see Figure 2) and a huge boom in North American AI startups. Just like the dot-coms in the late 1990s, this AI boom was characterized by unrealistic expectations. When the boom went bust, the field fell into a trough of disillusionment that Americans call the AI Winter. A similar disillusionment had already struck earlier, elsewhere (see the "Comments on the Lighthill Report" sidebar).


Figure 1. The hype cycle for new technology.



Figure 2. Attendance at the National Conference on Artificial Intelligence (AAAI), the senior American AI conference. Figures prior to 1984 are not available. No figures are shown for 1985, 1989, and 2001 because these were IJCAI (International Joint Conference on Artificial Intelligence) years.

If a technology has something to offer, it won't stay in the trough of disillusionment, and AI has indeed risen to a new, sustainable level of activity. For example, Figure 2 shows that although AI conference attendance has been stable since 1995, it is nowhere near the unsustainable peak of the mid-1980s.
With this special issue, I wanted to celebrate and record modern AI's achievements and activity. Hence, the call for papers asked for AI's current trends and historical successes. But the best-laid plans can go awry. It turns out that my "coming of age" special issue was about five to 10 years too late. AI is no longer a bleeding-edge technology—hyped by its proponents and mistrusted by the mainstream. In the 21st century, AI is not necessarily amazing. Rather, it's often routine.
A MATURING TECHNOLOGY
Evidence for AI technology's routine and dependable nature abounds. For example, in this issue (see the related sidebar for a full list), authors describe various tools to augment standard software engineering:

    • Yunwen Ye describes agents that assist software engineers using large libraries of components.

    • Bernhard Peischl and Franz Wotawa show how to use AI diagnosis tools on software source code.

    • Gary Boetticher demonstrates how well AI can learn effort estimations for software projects.

In other work, the AI field has generated many mature tools that are easily used, well documented, and well understood. For example, late last year, one of my undergraduate research assistants mentioned nonchalantly that he'd just run some data through four different data miners! That student was hardly a machine learning expert—and in the 21st century, he didn't need to be. The Waikato Environment for Knowledge Analysis (Weka) toolkit (see Figure 3) contains dozens of state-of-the-art data miners, all tightly integrated around the same underlying database and object model. Weka is free, open source, and well documented; 6 it runs on many platforms and is easy to install (it took my student less than three minutes to download, install, and start running the learners). You can access it at www.cs.waikato.ac.nz/~ml/weka/index.html.
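To give a flavor of how little machine learning expertise such an experiment now requires, here is a minimal sketch (in Java) of a ten-fold cross-validation run with one Weka learner. It assumes a recent Weka distribution on the classpath, where the C4.5-style tree learner lives in weka.classifiers.trees.J48, and uses a placeholder ARFF file name; treat it as an illustration rather than a prescription.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;

public class QuickMining {
    public static void main(String[] args) throws Exception {
        // Load a data set in Weka's ARFF format ("weather.arff" is a placeholder).
        Instances data = new Instances(new BufferedReader(new FileReader("weather.arff")));
        data.setClassIndex(data.numAttributes() - 1); // predict the last attribute

        // Ten-fold cross-validation of a C4.5-style decision-tree learner.
        Classifier tree = new J48();
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}

Swapping in a different learner is a one-line change, which is why running data through four data miners in an afternoon is no longer a remarkable feat.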


Figure 3. The Waikato Environment for Knowledge Analysis tool.

Natural language processing is another example of AI's success. In times past, NL processing was difficult, with a low chance of success. These days, researchers can rely on numerous tools to build successful NL applications. For example, NL processing often requires extensive background knowledge about the words being processed. Many general ontologies are now freely available. These public-domain ontologies range from WordNet (a lexical database for English) to OpenCyc (a formalization of many commonsense concepts). More specific ontologies are also freely available, such as the Unified Medical Language System (see Figure 4).


Figure 4. Part of the Unified Medical Language System semantic network (www.nlm.nih.gov/research/umls/META3.HTML). Each child in the hierarchy is linked to its parent by the isa link.

Overall, the ontologies are extensive. For example, WordNet covers 111,223 English words, and UMLS's January 2003 edition (release 2003AA) includes 875,255 concepts and 2.14 million concept names drawn from over 100 biomedical source vocabularies, some in multiple languages. Building such ontologies is a huge task, and David Schwartz (in this issue) discusses a global initiative to build semantic dictionaries via the World Wide Web.
Apart from ontologies, executable NL tools are also readily available. For example, Debbie Richards recently led a small university team that implemented a system to detect contradictions between different NL sentences in an object-oriented design. 7 In the 1980s, such software would have been found only in science fiction. But in the 21st century, the Richards team had very little NL processing to implement. They simply added (for example) negotiation tools to standard NL components. Those components included a system that extracts answers from NL text, a formal concept analysis component that generates a visualization of the text, and Prolog and Java tools that implemented the remaining functionality. This menagerie of tools might seem complex. However, AI components are now mature enough that such combinations are straightforward.
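The kind of combination described above can be sketched abstractly. The following Java fragment is not the Richards team's actual design; it is a hypothetical illustration, with invented class and interface names, of how mature components can be chained behind one trivial piece of glue code.

import java.util.ArrayList;
import java.util.List;

// Hypothetical shared structure passed between components.
class DesignText {
    String text;
    List<String> findings = new ArrayList<String>();
    DesignText(String text) { this.text = text; }
}

// Hypothetical common interface that each off-the-shelf component is wrapped behind.
interface Stage {
    DesignText process(DesignText d);
}

public class SimpleCombination {
    public static void main(String[] args) {
        // Placeholder stages standing in for mature components such as answer
        // extraction and formal concept analysis; here they only record that they ran.
        List<Stage> stages = new ArrayList<Stage>();
        stages.add(new Stage() {
            public DesignText process(DesignText d) { d.findings.add("answer extraction ran"); return d; }
        });
        stages.add(new Stage() {
            public DesignText process(DesignText d) { d.findings.add("concept analysis ran"); return d; }
        });

        DesignText doc = new DesignText("The sensor shall always report. The sensor may stay silent.");
        for (Stage stage : stages) {
            doc = stage.process(doc); // trivial glue: the components do the real work
        }
        System.out.println(doc.findings);
    }
}

The point of the sketch is only that the glue is small; the heavy lifting sits inside the reused components.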
Challenges
AI still can't be smug, despite the successes listed in the "AI Applications" sidebar. Although some AI areas are mature, there's still much to learn and some traps to avoid.
In his invited talk at AAAI 1999, Nils Nilsson argued that the easy days of AI are over:
The easy work (such as inventing A* and the idea of STRIPS operators) is over. AI is getting harder. In addition, AI researchers will have to know a lot about many related disciplines.
Nilsson offered Figure 5 as a partial list of related disciplines. He warned against a fission effect, which could tear apart the field. Paradoxically, this effect results from AI's success:
Fission is promoted by the tendency of AI to be pulled apart by the many adjacent disciplines that join with AI to field practical, large-scale applications in specialized niches.
In fact, some computer scientists and others might go so far as to say, "Why do we need AI as a separate field? One could carve it up, add the parts to adjacent fields and get along perfectly well without it."


Figure 5. Near neighbors to AI.

Nilsson then proposed several large challenge problems to maintain a coherent field of study in AI. For the record, his hot list of near-term research includes case-based reasoning (again) for planning; using logic (again) for planning; SAT encodings of planning problems; large, reusable knowledge bases; agent (robot and softbot) architectures; agent-to-agent communication languages; more expressive Bayes nets; Bayes net learning; and genetic algorithm and programming techniques.
Although Nilsson's comments are timely, I'm more confident than he is about AI's future as a coherent discipline. The long-term goal of emulating general human intelligence remains, and that goal will bind future generations of AI researchers. The successes listed in the "AI Applications" sidebar show that you can achieve much without human-level sophistication. Nevertheless, I still dream of the day when my word processor writes articles like this one while I go to the beach.
The Road Ahead
The goal of creating general human-level intelligence has inspired, and still inspires, decades of talented graduate students who flock to the hardest problem they know. These students strive to distance themselves from those working on other well-defined, mostly solved problems. Hence, these students are always proud to boast that they are working on AI.
It's bad manners to form an army if you can't feed them. But our AI graduate students won't starve. As they work toward the long-term goal of human-level intelligence, they'll still be able to pay the rent using AI, for example, by working in the emerging gaming industry. This industry is already huge (approximately $17 billion in revenue in 2002 8) and is still growing. Our AI workers will stay busy building the next generation of gaming softbots. 9 As the World Wide Web grows, these softbots will have access to "eyes" that can see more information than any human intelligence could see in a lifetime. As we continue to use software to control our world, these softbots will be given increasingly sophisticated "arms." With these eyes and arms, such systems will have a growing opportunity to learn about and influence the world.
For another AI meal ticket, consider the growing field of model-based software engineering. Safety concerns are forcing the aviation industry to adopt MBSE. More planes are flying each day, while the odds of a software error on any one flight remain roughly constant. Unless we can reduce the rate of software errors, by 2030 a major air traffic accident will be reported daily. MBSE tools allow for early life-cycle software simulation, verification, and validation. Furthermore, they remove the need for laborious, possibly error-prone, manual code generation. So, the aviation industry is rapidly maturing MBSE. Soon, the broader software engineering community will be able to access and use MBSE tools. When that happens, accurate declarative descriptions of all software will exist. Bring on the AI! For example,

    • Use case-based reasoning to find model-based components that are relevant to the current development (a retrieval sketch follows this list)

    • Apply search methods or constraint satisfaction tools to optimize verification

    • Work within knowledge acquisition and maintenance environments to enable faster model collection and modification
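As a rough illustration of the first bullet, the following Java sketch shows the simplest form of case-based retrieval: stored model components are described by numeric feature vectors, and the component closest to the current development's needs is returned. Every component name, feature, and number here is invented for illustration; a real MBSE case base would need richer indexing, similarity measures, and adaptation.

import java.util.LinkedHashMap;
import java.util.Map;

public class ComponentRetriever {
    // Euclidean distance between two feature vectors of equal length.
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // Toy case base: component name -> features (say, inputs, outputs, criticality).
        Map<String, double[]> caseBase = new LinkedHashMap<String, double[]>();
        caseBase.put("altitude-monitor",  new double[]{3, 1, 0.90});
        caseBase.put("fuel-estimator",    new double[]{2, 2, 0.40});
        caseBase.put("collision-avoider", new double[]{4, 1, 0.95});

        double[] query = {3, 1, 0.85}; // features of the component the current project needs

        // Retrieve the stored case nearest to the query.
        String best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> entry : caseBase.entrySet()) {
            double d = distance(query, entry.getValue());
            if (d < bestDistance) { bestDistance = d; best = entry.getKey(); }
        }
        System.out.println("Closest reusable component: " + best);
    }
}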

Will MBSE or softbot manufacturers use the term AI? Of course! A rose by any other name is still implemented using AI. Softbots will use search methods and data mining techniques. MBSE will still need to understand the logic of its knowledge representations. These systems will integrate via the high-level languages we developed, using ontologies we built and debugged. As we struggle to implement, understand, and optimize MBSE-built agents running around the World Wide Semantic Web, researchers will still rush to read the latest issues of Artificial Intelligence Journal, the proceedings from AAAI and IJCAI, and (of course) IEEE Intelligent Systems.
Conclusion
Modern AI workers can be very proud. Much has been accomplished. We have survived the birth trauma of this new technology. We have developed tools that enabled numerous landmark applications. We have matured those tools into dependable and reusable components. And we still inspire the smartest minds to work on the hardest problems.
As proud as we are, we mustn't be smug. Consider the list of landmark events shown in Table 1. Compared to any of those, is AI remarkable enough to be memorable in, say, 200 years' time? I think not—but that can change. AI's mark in history could be prominent and permanent if the 21st century becomes the birthday of this planet's second intelligent race. Your own work—past, present, and future—will decide.

Table 1. Remarkable events in the 20th and 21st centuries.


Nils Nilsson, Enrico Coiera, and numerous contributors to the comp.ai newsgroup kindly shared their lists of landmark AI applications. Also, Nigel Shadbolt and Chris Welty offered useful and timely advice during this issue's planning.
I conducted this research at West Virginia University under NASA contracts NCC2-0979 and NCC5-685. The NASA Office of Safety and Mission Assurance, under the Software Assurance Research Program led by the NASA Independent Verification and Validation Facility, sponsored this work.

References

Tim Menzies is the software engineering research chair at NASA's Independent Verification and Validation Facility. His research interests include data mining, software engineering, knowledge engineering, and verification & validation. He received his PhD in artificial intelligence from the University of New South Wales, Sydney, Australia. He is a member of the IEEE and ACM. Contact him at tim@menzies.us; http://menzies.us.