Issue No.01 - January/February (2004 vol.24)
Published by the IEEE Computer Society
Joshua Strickon, Apple Computer
The field of computer graphics has matured considerably over the past decade. Photorealistic animation and physically modeled simulations run with commodity software on low-cost graphics cards in PCs, and jobs that once took days on a high-end machine can now be rendered in real time on an augmented PC. Accordingly, the computer graphics research community has pushed out to the fringes—for example, autonomous actors, parameter extraction from images and data, and rendering details so fine that they're barely noticeable.
Interaction, however, is another story. Beyond the keyboard and mouse control of today's dominant GUIs, no established, more appropriate techniques exist for interacting with graphics and mixed media. Researchers in fields like human-computer interfaces, virtual reality, and interactive art have defined and approached interaction in many different ways, but the field hasn't yet converged on a standard set of tools beyond the most basic available on any computer. The technologies of transduction are also following Moore's law, provoking an explosion in the ways input and output can be coupled to and interpreted by a computer. Interaction with graphical and multimedia systems has thus remained an active frontier that spans many fields of application, attracting approaches that are occasionally visionary, often inventive, and sometimes crazy, but that always offer a wild and stimulating intellectual ride.
Siggraph, the premier conference on computer graphics and interactive techniques, has just celebrated its 30th anniversary. Siggraph's Emerging Technologies (E-Tech) exhibition is unique, as it is the world's central venue for experiencing interactive technologies. Other well-known events that showcase interactivity—such as Ars Electronica and Imagina—are fundamentally about art, and while they offer an opportunity to experience the installations and present provocative seminars, their emphasis isn't on technology and research. In contrast, several technical conferences cover interaction, such as the ACM's Conference on Computer-Human Interaction (CHI), the Symposium on User Interface Software and Technology (UIST), and the Symposium on Designing Interactive Systems (DIS), as well as the IEEE's International Conference on Multimedia and Expo. Although these offer limited demonstration sessions, they focus on technical presentations. None go to the extent that E-Tech does in bringing large installation demonstrations to a wide audience that crosses between the worlds of academia, industry, and art. Projects are displayed in an interactive, museum-like space—E-Tech is all about trying things out, and experiencing a demo firsthand is the most direct way to appreciate it.
E-Tech was first presented in 1991; 12 years later, present-day interactive techniques are perhaps where computer graphics were 20 years ago. In 2002, John Fujii of Hewlett-Packard mapped out the history of E-Tech as a means of documenting the contributions made over the years (see http://www.siggraph.org/~fujii/etech/history.html). He derived 28 different categories that group the projects, ranging from autonomous characters to telepresence, then created a visualization showing how the different projects crossed topics. Understanding this past can help us define emerging fields and forge ahead in the future.
The 2003 E-Tech program built atop that past by providing opportunities for contributors to formally present and publish their work. Hailing from across the world, researchers gathered to show 21 jury-selected interactive installations (see http://www.siggraph.org/s2003/conference/etech/index.html). In an effort to expand the program as well as provide a forum for discussion, each presenter also gave a 45-minute talk describing his or her project. E-Tech also touched the cyber frontier by sponsoring a special panel session, "Android Dreams: The Present and Future of Autonomous Robotics." With the formal presentation program tilted much more toward animation and graphics, these sessions brought Siggraph's balance back toward interaction.
This special issue of IEEE CG&A documents some of the installations shown at E-Tech. Through the articles and accompanying CD-ROM material, we hope you can experience a sampling of what more than 24,000 people encountered at the exhibit.
In This Issue
The six articles in this special issue run the gamut of interactive technologies. With an emphasis ranging from computer music to robotics, each of these projects illustrates a novel way to interact with various media (such as tangible systems, audio, and graphics). While not all of the installations feature graphics, they all imply applications that could augment graphical environments.
The first article, by Yasuda et al., describes a simple approach to isolating humans from a projected background scene by using a thermal camera, making it possible to drop real-time visual avatars into virtual worlds. As long as you are alive (and not in a thermally opaque suit), Thermo-Key can provide the computer with a clean representation of your silhouette.
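The core idea—keying on body heat rather than color—can be sketched in a few lines. This is a minimal illustration, not the Thermo-Key implementation; the threshold value, array names, and image sizes are all illustrative assumptions.

```python
import numpy as np

def thermal_key(thermal, foreground, background, threshold_c=30.0):
    """Composite foreground pixels wherever the thermal image exceeds
    a skin-temperature threshold; show the virtual background elsewhere."""
    mask = thermal > threshold_c                  # boolean silhouette
    return np.where(mask[..., None], foreground, background)

# Tiny synthetic scene: one "warm" pixel where a person stands.
thermal = np.array([[36.5, 20.0],
                    [20.0, 20.0]])
fg = np.full((2, 2, 3), 255, dtype=np.uint8)      # camera image (white)
bg = np.zeros((2, 2, 3), dtype=np.uint8)          # virtual world (black)
out = thermal_key(thermal, fg, bg)
```

Unlike chroma keying, the mask here is independent of lighting and clothing color, which is what lets the system produce a clean silhouette of any live subject.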
In the second article, Pachet gives the computer a degree of musical cognition with a system called the Continuator, which automatically synthesizes accompaniment by analyzing the player's style with hidden Markov models—digitally capturing the player's musical soul, so to speak. The Continuator can also operate in a call-and-response mode, carrying on a musical conversation between artificial and live players.
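The Continuator's style model is considerably richer than this, but the underlying idea—learn transition statistics from a live performance, then sample a continuation in the same style—can be sketched with a first-order Markov chain over notes. The note names and seed phrase below are illustrative, not drawn from Pachet's article.

```python
import random
from collections import defaultdict

def learn_transitions(notes):
    """Count first-order transitions: note -> list of observed successors."""
    model = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        model[a].append(b)
    return model

def continue_phrase(model, seed, length, rng=random):
    """Sample a continuation in the learned 'style'."""
    out, current = [], seed
    for _ in range(length):
        choices = model.get(current)
        if not choices:              # dead end: restart from the seed
            current = seed
            choices = model[current]
        current = rng.choice(choices)
        out.append(current)
    return out

phrase = ["C4", "E4", "G4", "E4", "C4", "E4", "G4", "C5"]
model = learn_transitions(phrase)
continuation = continue_phrase(model, "C4", 8)
```

A call-and-response mode follows naturally: each time the live player finishes a phrase, the system retrains on it and answers with a sampled continuation.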
Other technologies let users reach into the computer's world. The next article, by Kajimoto et al., lets the user touch the untouchable. Their SmartTouch system exploits distributed electro-tactile stimulation to synthesize virtual touch, lending a degree of physicality to a synthetic world.
Blending the physical with the digital, the article by Rosenfeld et al. melds tangible robotics with graphics through a table that tracks mobile objects. A longstanding problem with tangible interfaces is their inability to implement an undo—once you move a set of physical objects, they can't just walk back to where they came from. This article challenges that premise with a set of robotic objects that the user manipulates while the computer simultaneously tracks and moves them. By superimposing graphics on the table, the system provides an immersive and reconfigurable tangible environment.
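What makes undo possible in such a system is simply that the computer both tracks and can actuate the objects, so it can keep a position history and drive an object back to an earlier state. The class below sketches only that bookkeeping; the table and its actuation API are hypothetical, and not taken from the Rosenfeld et al. article.

```python
class TangibleUndo:
    """Track object positions so an actuated table can 'undo' a move.

    record() is called each time the tracker sees an object settle at a
    new position; undo() returns the position the table should drive the
    object back to (the actuation itself is outside this sketch).
    """
    def __init__(self):
        self.history = {}            # object id -> stack of (x, y) positions

    def record(self, obj, pos):
        self.history.setdefault(obj, []).append(pos)

    def undo(self, obj):
        stack = self.history.get(obj, [])
        if len(stack) < 2:
            return None              # nothing to undo
        stack.pop()                  # discard the current position
        return stack[-1]             # previous position becomes the target
```

With purely passive tangibles, this history would be useless—the computer could remember where the blocks were, but not put them back.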
The article by Farbood et al. describes Hyperscore, a graphical music composition system in which users draw complex music with simple scribbling. Because the barriers to entry are quite low, schoolchildren in music classes have used Hyperscore to produce many pieces, some of which have been played by symphony orchestras. The software is also included on the CD-ROM for readers to try for themselves.
Concluding this issue is an article by Holmquist et al. describing wireless sensor packages that enable the emerging field of ubiquitous computing. This article focuses on applications of Smart-Its, a family of embedded networked sensors developed by a European Union collaboration.
The technology for producing graphical images has outpaced our ability to interact with them. In this issue, we've provided a snapshot through the window of E-Tech that captures future interactive techniques and possibilities as researchers strive toward closing the gap between computer graphics and interaction.
We thank all the authors and reviewers for making this issue possible. We also thank the staff of CG&A for their support and thank the members of this year's E-Tech jury for lending their expertise in evaluating the many submissions that we received. Finally, we acknowledge ACM Siggraph for producing the E-Tech venue.
Joshua Strickon is a senior R&D engineer at Apple Computer and was the Siggraph 2003 Emerging Technologies chair. His research interests include the application of interactive technologies in the fields of entertainment and media. Strickon has a BS and MEng in electrical engineering and computer science and a PhD in media arts and sciences from the Massachusetts Institute of Technology.
Joseph A. Paradiso is a Sony Career Development associate professor at the Massachusetts Institute of Technology Media Lab and directs the Responsive Environments Group. His research interests include new sensor architectures for interactive systems. Paradiso has a BS in electrical engineering and physics from Tufts University and a PhD in physics from MIT. Before coming to the Media Lab, he worked at Draper Laboratory in Cambridge and ETH in Zurich on high-energy physics detectors, spacecraft control systems, and underwater sonar, and has long been active in the electronic music community as a designer of synthesizers and musical interfaces.