From the Editor in Chief: The Quiet Revolution
SEPTEMBER/OCTOBER 2003 (Vol. 18, No. 5) pp. 2-4

Nigel Shadbolt, University of Southampton
A recurrent theme in my editorials over the past 30 months has been the success story that is artificial intelligence. I've argued that, despite our failure to deliver Stanley Kubrick's HAL or Steven Spielberg's David, we've been busy providing firm foundations for intelligent systems. We can see some of these achievements in the May/June issue of IEEE Intelligent Systems, which was devoted to AI's Second Century. In my last editorial, I discussed the likely development of ambient intelligence: a pervasive, ubiquitous computing fabric in which many kinds of routine intelligence will permeate our environments. Light switches that configure themselves to a particular lighting arrangement might seem prosaic; central-heating furnaces that let the engineer know when routine maintenance is due might seem unremarkable. Web services that classify a document against an existing taxonomy and word processors that spot stylistic infelicities might seem mundane. However, these micro-intelligences are the necessary first steps in any widespread deployment of the results of our discipline.
Toward Everyday Intelligence
In 1988, psychologist Don Norman wrote an excellent book called simply The Psychology of Everyday Things. In it he urged students and researchers alike to examine the routine and familiar objects that surround us. He asked them to consider the extent to which these objects displayed interesting psychological phenomena. He used a number of compelling case studies to illustrate that many designed objects violated basic principles of cognitive ergonomics. Take the humble shower: two variables, how fast and how hot. Then consider the myriad taps, faucets, levers, and handles you've encountered that make this basic control a thing of tortuous complexity. Or take the way a stove's burner controls are laid out, forcing you to transform, map, and reflect in your mind's eye to work out which knob controls which burner.
Norman works in human-computer interfaces too, and software and application designers have adopted many of his ideas. But nowhere near enough people have read his elegant and appealing analyses. If a routine kitchen appliance can boast a psychological dimension, why not embrace the "intelligence of everyday things"? I would much rather have an interface that learns, from my repeated attempts to save a file in one place, to override its dumb default that puts the file in the place from which the application was last launched. Give me a device that uses a good biometric method to tell me the name of the person I'm talking to, whom I know I've met before. Serious political heavyweights have people whispering such information in their ear all the time. Why should they have all the fun?
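To make the flavor of such everyday adaptation concrete, here is a minimal sketch in Python of a save dialog that proposes whichever directory its user has chosen most often in recent saves, rather than a fixed default. The names (AdaptiveSaveDialog, record_choice, suggest_directory) are purely illustrative and not drawn from any real application.

    from collections import Counter
    from pathlib import Path

    class AdaptiveSaveDialog:
        """Suggest a default save directory learned from the user's own behavior."""

        def __init__(self, fallback: Path, history_size: int = 50):
            self.fallback = fallback          # the "dumb default" used until we learn better
            self.history: list[Path] = []     # directories the user actually chose
            self.history_size = history_size

        def record_choice(self, saved_file: Path) -> None:
            """Call whenever the user saves a file; remember where it went."""
            self.history.append(saved_file.parent)
            self.history = self.history[-self.history_size:]   # keep only recent behavior

        def suggest_directory(self) -> Path:
            """Propose the most frequently chosen recent directory, else the fallback."""
            if not self.history:
                return self.fallback
            most_common_dir, _ = Counter(self.history).most_common(1)[0]
            return most_common_dir

    # After a few saves into ~/papers, the dialog stops proposing the launch directory.
    dialog = AdaptiveSaveDialog(fallback=Path.home())
    for name in ("draft1.doc", "draft2.doc", "notes.txt"):
        dialog.record_choice(Path.home() / "papers" / name)
    print(dialog.suggest_directory())   # .../papers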
More Power to the Computers
Another theme I've alluded to in previous editorials is certainly bringing about a revolution: the continual increase in computational power at our disposal. In "Brute Force and Insight" (Nov./Dec. 2001), I argued that raw power was letting researchers tackle problems and exploit methods that would have been inconceivable a few generations of machine hardware ago. In "Grandly Challenged" (Jan./Feb. 2003), I mentioned that at a recent UK workshop we discussed the idea of building systems to store, index, and manage an individual's experiences over his or her lifetime. This idea has a long history in computing. The Memex machine described in Vannevar Bush's "As We May Think" (July 1945 Atlantic Monthly) is one articulation. Currently, Microsoft's Gordon Bell is busy archiving his life in a project called MyLifeBits. Ted Nelson, who originated the term hypertext, has also been squirreling away his life as audio and video tapes, emails, notes, and documents of every kind.
What's starting to make these digital autobiographical endeavors really exciting is that computing power and storage capacity are converging on the task's requirements. The biblical life span of three score and ten years is approximately 25,550 days of experience, which equates to 613,000 hours or 2.2 billion seconds. Suppose we reserve 100 kbits per second for a compressed audio-video stream. This is pretty impoverished as a record, but it gives us something. Using this benchmark, a lifetime of audiovisual content is 27.5 Tbytes of data. Currently that would require 343 hard drives, each with 80 Gbytes of capacity. If we start the experiment now, in two years' time we'll have accumulated about ten such drives' worth of data; in the meantime, though, storage capacity is at least doubling every 18 months.
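For readers who want to check the arithmetic, here is a back-of-envelope sketch in Python using the same assumptions as above (70 years of 365 days, a 100-kbit-per-second stream, 80-Gbyte drives); the small differences from the rounded figures quoted in the text are just that, rounding.

    # Back-of-envelope check of the lifetime-storage figures.
    SECONDS_PER_DAY = 24 * 60 * 60
    DAYS = 70 * 365                          # "three score and ten" years
    SECONDS = DAYS * SECONDS_PER_DAY         # roughly 2.2 billion seconds

    BITRATE_BPS = 100_000                    # 100 kbits per second, compressed audio-video
    DRIVE_BYTES = 80e9                       # one 80-Gbyte drive

    lifetime_bytes = SECONDS * BITRATE_BPS / 8
    two_year_bytes = 2 * 365 * SECONDS_PER_DAY * BITRATE_BPS / 8

    print(f"days of experience:  {DAYS:,}")                            # 25,550
    print(f"hours of experience: {DAYS * 24:,}")                       # 613,200
    print(f"seconds:             {SECONDS:,}")                         # ~2.2 billion
    print(f"lifetime storage:    {lifetime_bytes / 1e12:.1f} Tbytes")  # ~27.6
    print(f"80-Gbyte drives:     {lifetime_bytes / DRIVE_BYTES:.0f}")  # ~345
    print(f"after two years:     {two_year_bytes / 1e9:.0f} Gbytes, "
          f"~{two_year_bytes / DRIVE_BYTES:.0f} drives")               # ~788 Gbytes, ~10 drives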
Alan Dix has taken this analysis to its ultimate conclusion ("The Ultimate Interface and the Sums of Life?" Interfaces, Spring 2002) and suggests that by the end of an experiment started today, storage capacities will have increased 12 orders of magnitude—a trillion times more capacity stored at ever smaller scales! He estimates that at 100 kbits per second and at 1,000 atoms to store a bit, the curves suggest that by 2073 your life would fit on a grain of sand. Of course, we'll be busy recording more and more in richer and richer representations; I'm sure we'll arrange to keep soaking up memory capacity. Moreover, the opportunities this hardware evolution offers will require techniques and methods that are bound to originate in intelligent systems and AI research. Specifically, this includes the problems of modeling, annotating, linking, and retrieving content.
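As for the grain-of-sand figure, a rough plausibility check (my own illustrative sketch, not Dix's calculation) suggests it is comfortably in range. Assuming the grain is about a cubic millimeter of quartz, a lifetime recorded at 100 kbits per second and stored at 1,000 atoms per bit needs only a small fraction of the atoms such a grain contains:

    # Illustrative check: does a lifetime at 1,000 atoms per bit fit in a sand grain?
    # Assumed grain: ~1 mm^3 of quartz (SiO2), density ~2.65 g/cm^3; figures are
    # back-of-envelope assumptions, not taken from Dix's article.
    AVOGADRO = 6.022e23
    GRAIN_VOLUME_CM3 = 1e-3                  # about one cubic millimeter
    QUARTZ_DENSITY_G_PER_CM3 = 2.65
    SIO2_MOLAR_MASS_G = 60.1                 # each SiO2 unit contributes 3 atoms

    lifetime_bits = 2.2e9 * 100_000          # ~2.2 billion seconds at 100 kbits per second
    atoms_needed = lifetime_bits * 1_000     # 1,000 atoms per stored bit

    grain_grams = GRAIN_VOLUME_CM3 * QUARTZ_DENSITY_G_PER_CM3
    atoms_in_grain = grain_grams / SIO2_MOLAR_MASS_G * AVOGADRO * 3

    print(f"atoms needed:    {atoms_needed:.1e}")     # ~2.2e17
    print(f"atoms in grain:  {atoms_in_grain:.1e}")   # ~8e19, a few hundred times more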
Dealing With the Possible Consequences
However, science and technology don't exist in a vacuum. Clearly, social issues surround much of what we do. The capabilities we're developing are raising serious ethical dilemmas. The funding we receive depends partly on the political and economic context in which we find ourselves. Public concerns can arise when people perceive threats to personal liberties and freedoms. In "The Shape of Things to Come" (Sept./Oct. 2001), I outlined how the terrible events of 11 September 2001 would change the research landscape in which we work. Nothing new here; as I observed, the requirement for national security and effective military capabilities has been significantly fueling our field for some time. In the two years since that editorial, we've seen ample evidence of these requirements driving the flow of funds.
The programs emerging from these funds have also given rise to public anxieties. In "Someone to Watch over You" (Mar./Apr. 2003), I discussed technologies, available now or on the horizon, that will enable increased levels of surveillance. In this area we've recently seen just how potent the collision of funding and public concern can be. DARPA's Information Awareness Office relaunched its proposal for a Total Information Awareness System as a proposal for a Terrorism Information Awareness System. The original proposal aroused much debate and controversy, receiving a range of critical media reviews. DARPA's LifeLog program (www.darpa.mil/ipto/programs/lifelog/index.htm) is attempting something similar: creating a complete record of an individual's experience. This too has attracted unfavorable comment in some quarters. Problems arise when research programs are constructed in a social or ethical vacuum. The issues involved in any kind of comprehensive or intelligent information surveillance must be considered at the outset. In other influential domains, such as reproductive biology and genetically modified crops, various countries have established powerful oversight authorities, some of which have set policy that precludes certain kinds of research. The question arises: do we need equivalent watchdogs and safeguards for the advanced information-processing technologies we're researching?
Where IS Fits In
I don't mean for this editorial to be a metalevel reflection on how prescient your editor in chief is. Rather, I want to point out that IEEE Intelligent Systems is in the happy position of presenting material that really is pervasive and at the leading edge of work that's transforming our world. Our discipline's technologies are being quietly deployed everywhere: on trains, planes, and automobiles; in settings from operating theaters to recording studios; and inside both Microsoft's best-selling applications and the computing infrastructure that IBM, Sun, Oracle, HP, and almost every other large IT corporation sells.
It's this magazine's job to make people aware of this quiet revolution: to make the important technical results clear and accessible and to set them in context. We can do this in a way specialist journals can't. All this brings me to another quiet revolution. At our last editorial board meeting, we learned that our citation rates are high, we have a solid subscriber base, many more people are accessing us through the IEEE Computer Society's Digital Library, and the IS Web site receives hundreds of thousands of hits. I've also learned that Intelligent Systems' impact factor is the second highest of all IEEE CS publications. This is music to any editor in chief's ears, and the editorial board felt that we should exploit our influence. You'll be hearing about a number of initiatives over the coming months.
Conclusion
Finally, it's my great pleasure to welcome to our editorial board Russ Altman, Subbarao Kambhampati, Enrico Motta, Lynne Parker, and Stefan Staab (see the sidebar for their biographies). A strong editorial board is vital if we're to make the most of our magazine and bring to wider attention the quality of work underway in our community.