
From the Editor in Chief: The Quiet Revolution

Nigel Shadbolt, University of Southampton

Pages: 2-4

A recurrent theme in my editorials over the past 30 months has been the success story that is artificial intelligence. I've argued that, despite our failure to deliver Stanley Kubrick's HAL or Steven Spielberg's David, we've been busy providing firm foundations for intelligent systems. We can see some of these achievements in the May/June IEEE Intelligent Systems, which was devoted to AI's Second Century. In my last editorial, I discussed the likely development of ambient intelligence—a pervasive, ubiquitous computing fabric in which many kinds of routine intelligence will permeate our environments. Light switches that configure themselves to a particular lighting arrangement might seem prosaic; central-heating furnaces that let the engineer know when routine maintenance is due might seem unremarkable. Web services that classify a document against an existing taxonomy and word processors that spot stylistic infelicities might seem mundane. However, these micro-intelligences are the essential and necessary first steps in any widespread deployment of the results of our discipline.


In 1988, psychologist Don Norman wrote an excellent book called simply The Psychology of Everyday Things. In it he urged students and researchers alike to examine the routine and familiar objects that surround us. He asked them to consider the extent to which these objects displayed interesting psychological phenomena. He used a number of compelling case studies to illustrate that many designed objects violated basic principles of cognitive ergonomics. Take the humble shower—two variables, how fast and how hot. Then consider the myriad taps, faucets, levers, and handles you've encountered that make this basic control a thing of tortuous complexity. Or take the way in which the controls for a stove's burners are projected into a layout that requires you to transform, map, reflect, and project in your mind's eye to determine what controls what.

Norman works in human-computer interfaces too, and software and application designers have adopted many of his ideas. But nowhere near enough people have read his elegant and appealing analyses. If a routine kitchen appliance can boast a psychological dimension, why not embrace the "intelligence of everyday things"? I would much rather have an interface that learns, from my repeated attempts to save a file in one place, to override its dumb default that puts the file in the place from which the application was last launched. Give me a device that uses a good biometric method to tell me the name of the person I'm talking to, whom I know I've met before. Serious political heavyweights have people whispering such information in their ear all the time. Why should they have all the fun?


Another theme I've alluded to in previous editorials is certainly bringing about a revolution: the continual increase in computational power at our disposal. In "Brute Force and Insight" (Nov./Dec. 2001), I argued that raw power was letting researchers tackle problems and exploit methods that would have been inconceivable a few generations of machine hardware ago. In "Grandly Challenged" (Jan./Feb. 2003), I mentioned that at a recent UK workshop we discussed the idea of building systems to store, index, and manage an individual's experiences over his or her lifetime. This idea has a long history in computing. The Memex machine described in Vannevar Bush's "As We May Think" (July 1945 Atlantic Monthly) is one articulation. Currently, Microsoft's Gordon Bell is busy archiving his life in a project called MyLifeBits. Ted Nelson, who originated the term hypertext, has also been squirreling away his life as audio and video tapes, emails, notes, and documents of every kind.

What's starting to make these digital autobiographical endeavors really exciting is the convergence of computing power and storage capability to the task's requirements. The biblical life span of three score and ten years is approximately 25,550 days of experience, which equates to 613,000 hours or 2.2 billion seconds. Suppose we reserve 100 kbits per second for a compressed audio-video stream. This is pretty impoverished as a record, but it gives us something. Using this benchmark, a lifetime of audiovisual content is 27.5 Tbytes of data. Currently that would require 343 hard drives, each with 80 Gbytes of capacity. If we start the experiment now, in two years' time we'll have six hard drives' worth of data—but in the meantime, storage capacity is at least doubling every 18 months.
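As a sanity check, the arithmetic above can be reproduced in a few lines. This is just a sketch using the text's own assumptions (a 70-year life span, a 100 kbit/s audio-video stream, 80-Gbyte drives); the computed totals land within a rounding step of the figures quoted.

```python
# Back-of-the-envelope check of the lifetime-storage figures above.
# Assumptions taken from the text: a 70-year life span, a 100 kbit/s
# compressed audio-video stream, and 80-Gbyte hard drives (2003-era).

SECONDS_PER_DAY = 24 * 60 * 60

days = 70 * 365                      # "three score and ten" years
hours = days * 24
seconds = days * SECONDS_PER_DAY

bits = seconds * 100_000             # 100 kbit/s stream, for a lifetime
tbytes = bits / 8 / 1e12             # total storage in terabytes
drives = tbytes * 1e12 / 80e9        # number of 80-Gbyte drives needed

print(f"{days:,} days, {hours:,} hours, {seconds / 1e9:.1f} billion seconds")
print(f"{tbytes:.1f} Tbytes, or {drives:.0f} drives of 80 Gbytes each")
```

The small differences from the quoted 27.5 Tbytes and 343 drives come purely from where the rounding is done.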

Alan Dix has taken this analysis to its ultimate conclusion ("The Ultimate Interface and the Sums of Life?" Interfaces, Spring 2002) and suggests that by the end of an experiment started today, storage capacities will have increased 12 orders of magnitude—a trillion times more capacity stored at ever smaller scales! He estimates that at 100 kbits per second and at 1,000 atoms to store a bit, the curves suggest that by 2073 your life would fit on a grain of sand. Of course, we'll be busy recording more and more in richer and richer representations; I'm sure we'll arrange to keep soaking up memory capacity. Moreover, the opportunities this hardware evolution offers will require techniques and methods that are bound to originate in intelligent systems and AI research. Specifically, this includes the problems of modeling, annotating, linking, and retrieving content.
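Dix's trillion-fold figure can also be checked against the 18-month doubling rate mentioned above. A quick sketch (assuming nothing beyond those two numbers) shows that 12 orders of magnitude corresponds to roughly 40 doublings, or about 60 years at that rate; his 2073 horizon therefore implies a slightly more conservative growth curve.

```python
import math

# How many capacity doublings produce Dix's trillion-fold (12 orders of
# magnitude) increase, and how long that takes at one doubling per
# 18 months (the rate quoted in the text).

factor = 1e12                        # a trillion times more capacity
doublings = math.log2(factor)        # doublings needed to reach it
years = doublings * 1.5              # at one doubling every 18 months

print(f"{doublings:.1f} doublings, or about {years:.0f} years")
```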


However, science and technology don't exist in a vacuum. Clearly, social issues surround much of what we do. The capabilities we're developing are raising serious ethical dilemmas. The funding we receive depends partly on the political and economic context in which we find ourselves. Public concerns can arise when people perceive threats to personal liberties and freedoms. In "The Shape of Things to Come" (Sept./Oct. 2001), I outlined how the terrible events of 11 September 2001 would change the research landscape in which we work. Nothing new here; as I observed, the requirement for national security and effective military capabilities has been significantly fueling our field for some time. In the two years since that editorial, we've seen ample evidence of these requirements driving the flow of funds.

The programs emerging from these funds have also given rise to public anxieties. In "Someone to Watch over You" (Mar./Apr. 2003), I discussed the technologies that are available or on the horizon that will enable increased levels of surveillance. In this area we've seen recently just how potent the collision of funding and public concern can be. DARPA's Information Awareness Office relaunched its proposal for a Total Information Awareness system as a proposal for a Terrorism Information Awareness system. The original proposal aroused much debate and controversy, receiving a range of critical media reviews. DARPA's LifeLog program is attempting something similar: creating a complete record of an individual's experience. This too has attracted unfavorable comment in some quarters. Problems arise when research programs are constructed in a social or ethical vacuum. The issues involved in any kind of comprehensive or intelligent information surveillance must be considered at the outset. In other influential domains—reproductive biology, genetically modified crops—various countries have instigated powerful overseeing authorities, some of which have determined policy that even precludes some kinds of research. The question arises—do we need equivalent watchdogs and safeguards for the advanced information-processing technologies we're researching?


I don't mean for this editorial to be a metalevel reflection on how prescient your editor in chief is. Rather, I want to point out that IEEE Intelligent Systems is in the happy position of presenting material that really is pervasive and at the leading edge of work that's transforming our world. Our discipline's technologies are being quietly deployed everywhere—on trains, planes, and automobiles; from operating theaters to recording studios; and inside both Microsoft's best-selling applications and the computing infrastructure that IBM, Sun, Oracle, HP, and almost every other large IT corporation sells.

It's this magazine's job to make people aware of this quiet revolution—to make the important technical results clear and accessible, and set them in context. We can do this in a way specialist journals can't. All this brings me to another quiet revolution. At our last editorial board meeting, we learned that our citation rates are high, we have a solid subscriber base, many more people access us through the IEEE Computer Society's Digital Library, and the IS Web site receives hundreds of thousands of hits. I've also learned that Intelligent Systems' impact factor is the second highest of all IEEE CS Publications. This is music to any editor in chief's ears, and the editorial board felt that we should exploit our influence. You'll be hearing of a number of initiatives over the coming months.


Finally, it's my great pleasure to welcome to our editorial board Russ Altman, Subbarao Kambhampati, Enrico Motta, Lynne Parker, and Steffen Staab (see the sidebar for their biographies). A strong editorial board is vital if we're to make the most of our magazine and bring to wider attention the quality of work underway in our community.



New Editorial Board Members

Russ Biagio Altman is an associate professor of genetics and medicine (and of computer science by courtesy) at Stanford University. He also directs the Stanford Center for Biomedical Computation. His primary research interests are in the application of computing technology to basic molecular biological problems of relevance to medicine. He holds an MD from Stanford Medical School and a PhD in medical information sciences from Stanford. He is a fellow of the American College of Physicians and the American College of Medical Informatics. He is a past president and founding board member of the International Society for Computational Biology, an organizer of the annual Pacific Symposium on Biocomputing, and an associate editor of Bioinformatics and Briefings in Bioinformatics. Contact him at the Dept. of Genetics, 300 Pasteur Dr., Stanford, CA 94305-5120.

Subbarao Kambhampati is a professor in Arizona State University's Department of Computer Science and Engineering. His research interests are automated planning and information integration. He directs ASU's Yochan research group. He received his bachelor's degree from the Indian Institute of Technology, Madras, and his MS and PhD from the University of Maryland, College Park. He received a 1994 National Science Foundation Young Investigator award, and his PhD dissertation received the ACM Samuel Alexander award. Contact him at the Dept. of Computer Science and Eng., Arizona State Univ., Tempe, AZ 85287-5406.

Enrico Motta is a professor of knowledge technologies and the director of the Open University's Knowledge Media Institute. His current research focuses on Semantic Web technologies—especially Semantic Web services and the application of Semantic Web technologies to knowledge management. He's a member of the executive boards of the OntoWeb Thematic Network and the joint US/EU Semantic Web Services Initiative Architecture Committee. He is also on the editorial boards of the International Journal of Human-Computer Studies and Web Semantics: Science, Services and Agents on the World Wide Web. He wrote Reusable Components for Knowledge Modelling (IOS Press, 1999). He has a first degree in computer science from the University of Pisa and a PhD in artificial intelligence from the Open University. Contact him at the Knowledge Media Inst., The Open Univ., Milton Keynes, MK7 6AA, UK.

Lynne Parker is an associate professor in the Department of Computer Science at the University of Tennessee, where she also directs the Distributed Intelligence Laboratory. She received a US Presidential Early Career Award for Scientists and Engineers, a US Department of Energy Office of Science Early Career Scientist Award, and a UT-Battelle Technical Achievement Award for Significant Research Accomplishments. She also serves on the editorial board of IEEE Transactions on Robotics and Automation and on a National Research Council scientific advisory panel for the Army Research Laboratory. She received her PhD in computer science from MIT. She's a member of the IEEE, the AAAI, the ACM, and Sigma Xi. Contact her at the Dept. of Computer Science, 203 Claxton Complex, 1122 Volunteer Blvd., Univ. of Tennessee, Knoxville, TN 37996-3450.

Steffen Staab is a senior lecturer at the University of Karlsruhe's Institute for Applied Informatics and Formal Description Methods (AIFB). His research concentrates on building and using explicit semantics. He is involved in several American research projects (such as OntoAgents and Project Halo) and European research projects (such as WonderWeb and Dot-Kom), and is coordinating the Semantic Web and Peer-to-Peer for knowledge management (SWAP) project. He is the department editor of IEEE Intelligent Systems' Trends & Controversies and an editorial board member of In Thought & Practice. He is cochair of the Semantic Web track of WWW 2004 and of the Starting AI Researchers Symposium (STAIRS 2004) at ECAI 2004. He received his habilitation (Privatdozent) from the University of Karlsruhe. Contact him at Inst. AIFB, Univ. of Karlsruhe, 76128 Karlsruhe, Germany.
