
Guest Editors' Introduction: Intelligent Web Services

Alun Preece, University of Aberdeen
Stefan Decker, Stanford University

Pages: 15-17

According to folklore, an ancient Chinese curse says, "May you live in interesting times," and interesting times are usually turbulent and difficult. The times we live in are not only interesting but also exciting. We are in just year 13 of the big bang that created the World Wide Web universe, a global information space previously unknown to humankind, and this universe is still evolving. Many people now see Web Services as the next generation in the Web's evolution. In the traditional Web model, the main activity is browsing: users follow hypertext links. In the Web Services model, the main activity is remote procedure calling: users invoke predefined tasks that accomplish some useful activity. Examples of Web Services include services supporting

  • Meaningful content-based discovery of information sources
  • Fusion of information from multiple sites
  • E-marketplaces—for example, auction houses
  • Convenient querying by users—for example, using natural language
  • The integration of conventional office workflow with Web publishing

The general aim is to turn the Web into a collection of computational resources, each with a well-defined interface for invoking its services. Currently, the main development focus of Web Services is rather narrow, concentrating on XML-based infrastructure issues. These include service description languages such as WSDL (Web Services Description Language), service composition languages such as IBM's WSFL (Web Services Flow Language), directory services such as UDDI (Universal Description, Discovery, and Integration), and messaging protocols such as SOAP (Simple Object Access Protocol). (The sidebar lists URLs for these and other areas of interest in this article.) This special issue looks beyond infrastructure to explore Web Services that intelligent systems technology can make possible.
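To make the infrastructure layer concrete, here is a minimal sketch of a SOAP 1.1 request envelope built with Python's standard library. The service namespace and the GetQuote operation are invented placeholders standing in for any WSDL-described operation, not part of a real service.

```python
# Sketch of a minimal SOAP 1.1 request envelope, built with the
# standard library. The service namespace and "GetQuote" operation
# are hypothetical placeholders for any WSDL-described operation.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.org/stockquote"  # hypothetical service namespace

def build_soap_request(symbol: str) -> bytes:
    """Wrap one operation call in a SOAP Envelope/Body structure."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
    ET.SubElement(op, f"{{{SVC_NS}}}symbol").text = symbol
    return ET.tostring(envelope, encoding="utf-8")

request = build_soap_request("IBM")
```

In a real deployment, the envelope would be POSTed over HTTP to the endpoint address given in the service's WSDL description.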


One key deployment area for Web Services is e-business. E-business is intrinsically task-based: participants engage in activities such as advertising, brokering, buying, and selling. Such activities can be defined as Web Services, allowing both end users and, under appropriate circumstances, software agents to invoke them directly. The interfaces to these Web Services might be configured to support different user characteristics—for example, mobile users or users requiring speech access.

On 5 August 2001, a Workshop on E-Business and the Intelligent Web was held at the International Joint Conference on Artificial Intelligence in Seattle. One of the workshop's goals was to bring together developers of different kinds of Web Services that support e-business. Several contributions to this workshop have been revised and expanded for this special issue.


The Semantic Web initiative's purpose is similar to that of Web Services: to make the Web machine-processable, rather than merely "human browsable." Key components of Semantic Web technology are

  • A unifying data model such as RDF (Resource Description Framework)
  • Languages with defined semantics, built on RDF, such as DAML+OIL (DARPA Agent Markup Language plus Ontology Inference Layer)
  • Ontologies of standardized terminology for marking up Web resources, used by semantically rich service-level descriptions (such as DAML-S, the DAML-based Web Service Ontology)
  • Support tools that assist the generation and processing of semantic markup

Thus Web Services are an essential ingredient of the Semantic Web and benefit from Semantic Web technology.
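The RDF data model underlying these components can be sketched as simple (subject, predicate, object) triples. The following toy example, whose class names and resources are invented rather than drawn from any published ontology, shows the kind of subclass inference that languages built on RDF make possible:

```python
# Toy illustration of the RDF data model: statements as
# (subject, predicate, object) triples, plus a simple inference step
# over an rdfs:subClassOf-style hierarchy. All class and resource
# names here are hypothetical examples.
SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"

triples = {
    ("ex:AuctionService", TYPE, "ex:EMarketplaceService"),
    ("ex:EMarketplaceService", SUBCLASS, "ex:WebService"),
}

def infer_types(triples):
    """Add (s, rdf:type, C) whenever s has a type that is a subclass of C."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(inferred):
            if p != TYPE:
                continue
            for s2, p2, o2 in list(inferred):
                if p2 == SUBCLASS and s2 == o and (s, TYPE, o2) not in inferred:
                    inferred.add((s, TYPE, o2))
                    changed = True
    return inferred

closed = infer_types(triples)
# The auction service is now also recognized as a Web Service.
```

Real systems would use a defined semantics such as DAML+OIL rather than this hand-rolled closure, but the principle is the same: markup plus ontology yields conclusions no single document states explicitly.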

In July 2001, the first International Semantic Web Working Symposium took place at Stanford University. The symposium provided an early snapshot of infrastructure and applications for the Semantic Web, including several examples of Web Services that exploit semantic markup approaches. Several of these papers have been revised for inclusion here.


Each article in this issue exemplifies a distinct kind of Web Service that uses intelligent systems technology. Together, the articles provide an overview of the possibilities for adding value to the Web by using diverse techniques such as agents, semantic markup, mediation, and natural language processing.

In "Agent-Based Integrated Services for Timber Production and Sales," Andreas Gerber and Matthias Klusch describe a software-agent-based architecture that supports e-business auction services. Their application domain is an agricultural e-marketplace, for which mobile access is an essential user requirement. The prototype system is undergoing field trials in Germany.

In "ITtalks: A Case Study in the Semantic Web and DAML+OIL," Scott Cost and his colleagues from the University of Maryland, Baltimore County, analyze one of the most extensive applications of DAML, intended to provide meaningful content-based access to a large online resource. In the spirit of practicing what they preach, they have applied semantic markup to collect, distribute, and present information about information technology seminars online.

In "An Information Integration Framework for E-Commerce," Ilario Benetti and his colleagues at the Università di Modena e Reggio Emilia examine a mediator-based system for integrating multiple vendors' product catalogs into a single portal-style system. Their Momis architecture reconciles multiple heterogeneous ontologies and supports information fusion services.

In "The Briefing Associate: Easing Authors into the Semantic Web," Marcelo Tallis, Neil Goldman, and Robert Balzer show how to augment common office productivity applications with tools supporting the creation of semantic markup as part of users' normal workflow. The promise is that documents so produced will be immediately ready for exploitation by Semantic Web Services.

In "Embedded Grammar Tags: Advancing Natural Language Interaction on the Web," Gautham Dorai and Yaser Yacoob address the important area of speech-based Web interfaces. Their approach augments Web resources with grammar-based markup information, which improves the accuracy of natural language query engines. This technique complements most other approaches described in this issue.


The diverse selection of intelligent Web Services presented in this special issue indicates these technologies' enormous potential. The articles highlight pragmatic approaches for deploying intelligent systems techniques to perform useful, reusable tasks for Web users. Nevertheless, the area is still in its infancy: the emphasis so far is on proofs of concept rather than deployed systems that add real value. The next stage is to move these technologies into full-scale showcase applications.

A significant area for debate is the extent to which applications need rich information representation and reasoning capabilities. Certainly, a trade-off exists between the resulting Web Service's functionality and the cost of developing the underlying markups and computational processes. The greater the functionality, the greater the cost. So, if we are to control costs, we might have to sacrifice functionality. The field needs to explore this trade-off in the context of real users' requirements.

As intelligent Web Services become more common, issues relating to their quality and trustworthiness will grow in importance. This is a natural consequence of the Web interaction model moving away from hands-on browsing to hands-off delegation to "black box" services. Users will want some indication of things such as how well the service has "understood" their needs, how thoroughly the service has fulfilled those needs, and how accurate and complete the result is.

However, the real power might come not from individual Web Services but from combining them in new, unforeseen ways. So far, the Web has stored mostly documents and static data, aimed at human consumption. With the emergence of Web Services, it will also store declarative, machine-processable descriptions of how to combine those services to achieve more sophisticated tasks. (The first Web Service composition languages, such as WSFL and DAML-S, are already emerging.) Because Web Services are connected to real-life tasks, these descriptions encode the knowledge of how to perform such tasks. They will reside on the Web, downloadable, understandable, and executable by everyone: not only humans but also automated agents.
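The idea of a declarative, executable composition description can be sketched as follows. The two services and the workflow steps are invented placeholders illustrating the principle, not constructs of WSFL or DAML-S:

```python
# Sketch of the idea behind composition languages such as WSFL or
# DAML-S: a workflow is declarative data that an agent can fetch and
# execute. The "services" and step names here are invented placeholders.
def find_supplier(state):          # stand-in for a discovery service
    return {"supplier": "acme", "item": state["item"]}

def place_order(state):            # stand-in for an ordering service
    return {"ordered": state["item"], "from": state["supplier"]}

SERVICES = {"findSupplier": find_supplier, "placeOrder": place_order}

# A machine-processable description: an ordered list of service names.
composition = ["findSupplier", "placeOrder"]

def execute(composition, state):
    """Run each named service in turn, threading its output into the state."""
    for step in composition:
        state.update(SERVICES[step](state))
    return state

result = execute(composition, {"item": "timber"})
# result records both the discovered supplier and the completed order
```

Because the composition is data rather than code, an agent that downloads it gains a new capability without being reprogrammed, which is exactly the prospect the next paragraph considers.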

Automated agents will be able to extend their own capabilities with these downloadable descriptions. And given that a Web Service description might be available for every imaginable task (just as a large portion of the world's knowledge is available on the Web now), an automated agent might be able to extend its capabilities beyond all imagination.

If an automated agent can indeed perform every imaginable task, we might need to revisit and revise the notion of machine intelligence.


We thank the reviewers of these articles for their thorough and timely contributions: Frans Coenen, Michael Erdmann, Martin Frank, Frank van Harmelen, Jeff Hefflin, Kit Hui, Yuhui Jin, Yannis Kalfoglou, Alexander Maedche, Terry Payne, Stephen Potter, and Ana Garcia Serrano. Alun Preece, Dan O'Leary, Rudi Studer, Jim Hendler, Robert Plant, and Dieter Fensel organized the IJCAI-01 Workshop on E-Business and the Intelligent Web. Isabel Cruz, Stefan Decker, Jérôme Euzenat, and Deborah McGuinness organized the International Semantic Web Working Symposium.

Useful URLs

About the Authors

Alun Preece is a lecturer in the Department of Computing Sciences at the University of Aberdeen. His main research interests are distributed knowledge-based systems, software agents, and industrial knowledge management. He is a member of the AAAI and the British Computer Society Specialist Group on Knowledge-Based Systems and Applied Artificial Intelligence, and he is on the editorial boards of the Knowledge Engineering Review, Intelligent Data Analysis, and IEEE Intelligent Systems. He received his BSc in computer science and his PhD from the University of Wales. Contact him at the Dept. of Computing Sciences, King's College, Aberdeen, Scotland AB24 3UE, UK.
Stefan Decker is a postdoctoral fellow at Stanford University's Department of Computer Science, where he works on realizing the Semantic Web in DARPA's DAML program. His research interests include information integration and mediation, Web Services, and, in general, the integration of knowledge representation and database technology for the World Wide Web. He did his PhD studies in computer science at the University of Karlsruhe, Germany, where he worked on ontology-based access to information. As a consulting activity, he designed and co-implemented a Web Service composition language for LastMileServices, a startup company in the Silicon Valley. Contact him at Stanford Univ., Gates Hall 4A, Rm. 425, Stanford, CA 94305-9040.