While the keyboard and mouse are still the predominant means for interacting with computers, other forms of input are emerging.
This evolution is occurring in large part because of recent technology trends: cheaper processing, less expensive and more robust displays of all sizes, and the rapid development of microelectronics, sensors, and actuators. These technological developments have in turn created a voracious consumer appetite for interactive devices.
Command-line and graphical interfaces controlled by a keyboard and mouse may still be the dominant form of interaction with digital content in the workplace, but consumers are seeking easier-to-use and more intuitive interaction and control devices to use in exploring the digital world of online communications and information access.
With the growing availability of touch-enabled smartphones and tablets, touch-based interaction is fast becoming the primary way for users to interact with digital media.1
In the not-too-distant future, as displays become more robust and less expensive to manufacture, a wide variety of touch-enabled surfaces—including public displays, tabletops, and walls—will increasingly become part of our everyday landscape. Further, as explicit gesture recognition becomes more technically sophisticated, it offers consumers a familiar way to interact with digital media and to enjoy gaming experiences. In particular, with the release of consumer products such as the Wii Remote, PlayStation Move, and Microsoft Kinect in recent years, the market has become aware of the potential applications for gesture-based interaction.
As mobile phones and other consumer devices incorporate a variety of sensors, such as accelerometers, light detectors, and proximity detectors, developers are devising new ways to interact with data and services. Beyond capturing explicit gestures, sensors can reliably detect activities as well as people's body movements and brain state, allowing a shift toward adaptive and responsive interfaces.
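As a concrete illustration of this kind of sensor-based input, consider shake detection on an accelerometer-equipped phone. The sketch below is purely hypothetical—the function name and thresholds are ours, not drawn from any product—and simply counts spikes in acceleration magnitude within a sampling window:

```python
import math

def detect_shake(samples, threshold=2.5, min_peaks=3):
    """Return True if the accelerometer trace looks like a shake gesture.

    samples: list of (x, y, z) acceleration readings in g.
    A 'peak' is a reading whose magnitude deviates from 1 g (gravity)
    by more than `threshold`; several peaks in one window count as a shake.
    """
    peaks = 0
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if abs(magnitude - 1.0) > threshold:
            peaks += 1
    return peaks >= min_peaks

# A still phone reads roughly (0, 0, 1); a shake produces large spikes.
still = [(0.0, 0.0, 1.0)] * 10
shake = [(0.0, 0.0, 1.0), (3.0, 2.0, 1.0), (-4.0, 0.5, 1.0),
         (3.5, -2.5, 1.0), (0.0, 0.0, 1.0)]
```

Production systems add filtering, windowing, and debouncing, but the core idea—thresholding a derived signal—underlies many implicit-input techniques.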
Along with devices such as cell phones and tablets, users also can appropriate everyday objects for interaction. Pioneered in the 1990s by researchers such as MIT's Hiroshi Ishii, the field of tangible user interfaces (initially called "graspable user interfaces") and tangible computing has been growing since its introduction.2
Early examples of tangible interaction include work done by Durrell Bishop in 1992 while at the Royal College of Art in London. In Bishop's Marble Answering Machine, marbles represent calls, and a user drops a marble into a dish to play back messages (design.cca.edu/graduate/uploads/pdf/marbleanswers.pdf). In 1995, George Fitzmaurice, Hiroshi Ishii, and William Buxton explored this further in their collaborative Bricks project: users could control virtual objects through physical bricks placed on a display surface, the ActiveDesk.3
Finally, of course, voice is key. Voice recognition technology is steadily improving and now offers a widely used input method for everyday dictation as well as for information search, navigation, and device control.
Nonkeyboard and mouse interfaces are often assumed to be more "natural." However, this is not always the case: what feels natural in one context does not necessarily translate to another.
Early work on technologies to enable interaction beyond the keyboard and mouse focused on the design of components and processes, as well as on the form factor of products. More recently, we have seen a shift toward the design of interactive services, the development of interaction design guidelines and models, and a concern with the user experience. It is increasingly evident to businesses that intuitive and enjoyable interactions with aesthetically pleasing designs can mean success or failure for a new service.
Intrigued by the possibilities that these groundbreaking technologies and devices offer, human-computer interaction researchers are exploring new approaches and frameworks to help them understand what input modality is best suited to the interaction required to achieve a given task, on what kind of device, and in what context.
For this special issue, we received submissions covering a variety of areas including interactive surfaces and tabletop computing; mobile computing user interfaces and interaction; tangible interaction and graspable user interfaces; embedded user interfaces and embodied interaction; natural interaction and gestures; and user interfaces based on physiological sensors and actuators.
The accepted articles represent what the reviewers felt were the most ambitious projects, with the authors providing interesting insights into their work and also raising difficult questions, some of which remain unanswered. In addition to offering concrete insights and recommendations, these contributions also will inspire other researchers, designers, and developers to further investigate specific topics.
In "Brain-Computer Interfaces: Beyond Medical Applications," Jan van Erp, Fabien Lotte, and Michael Tangermann identify several nonmedical applications for BCIs, which measure and process the brain's activity and use these signals to control devices and detect context. With practice, these systems are believed to become quite responsive to individual brain patterns.
BCIs have typically been developed for use in assistive devices. However, as the authors explain, several commercial products will soon be available, among which Mattel's Mindflex is perhaps the best known. As Figure 1 shows, BCIs have clear potential to revolutionize gaming as well as human-computer interaction, but researchers in this field still face many challenges.
Figure 1. Brain-computer interfaces. Gaming is an important application area for nonmedical BCIs.
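Many consumer BCIs of the Mindflex variety infer a coarse mental state from the power of the EEG signal in particular frequency bands—for example, elevated alpha-band (8–12 Hz) power is commonly associated with relaxation. The sketch below is purely illustrative (the function and thresholds are ours, not drawn from the article) and computes band power with a naive DFT:

```python
import cmath
import math

def band_power(signal, sample_rate, low_hz, high_hz):
    """Power of `signal` in the band [low_hz, high_hz] via a naive DFT.

    Returns the summed squared magnitude of the one-sided DFT bins
    whose frequencies fall inside the band.
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * sample_rate / n
        if low_hz <= freq <= high_hz:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power

# A 10 Hz sine (alpha band) sampled at 128 Hz for one second.
rate = 128
alpha_wave = [math.sin(2 * math.pi * 10 * t / rate) for t in range(rate)]
# A "relaxation" detector might compare alpha power to beta (16-24 Hz) power.
relaxed = band_power(alpha_wave, rate, 8, 12) > band_power(alpha_wave, rate, 16, 24)
```

Real systems use an FFT, artifact rejection, and per-user calibration, but this single feature hints at why practice matters: users gradually learn to modulate the very signal the device measures.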
In "Novel Interactions on the Keyboard," Hans Gellersen and Florian Block point out the distinct differences between the conventional computer keyboard and touch input displays, with the keyboard being highly evolved for "symbolic" interaction, while touch input displays are visual and dynamic but forgo tactile feedback. While some might see the keyboard as an anachronism, it is an extremely efficient form of input with physical properties—embossed keys, force feedback, and spring thresholds—that offer users considerable tactile feedback, allowing them to type quickly while looking at the screen.
However, it is possible to enhance and extend the traditional keyboard's form by combining it with an overlay to create a physical keyboard with a touch display layer that offers dynamic graphics output on each key. As Figure 2 shows, in the authors' prototype the keyboard display is an extension of the graphical user interface on the screen in front of the user, managing the keyboard as a coherent display space. By making the keyboard another part of the display, the authors show that embodied interaction can leverage the skill sets that many of us already have, thus expanding the repertoire of what we can do. This approach could give users new ways to manipulate digital information.
Figure 2. Silicon keyboard skin. (a) A tabbed keyboard on which the keys in the top row act as tabs for switching between different keyboard maps. (b) Selecting a hotkey temporarily overloads the adjacent keys with a menu of related options.
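The tab-switching behavior the figure describes can be modeled in a few lines of code. The sketch below is a hypothetical data model—map names, key labels, and class names are ours, not taken from the authors' prototype—showing how a top-row "tab" key might swap the map that determines what each dynamic key displays:

```python
# Each "tab" selects a keyboard map that reassigns the labels
# (and thus functions) drawn on the remaining dynamic keys.
KEYBOARD_MAPS = {
    "text":  {"F1": "bold", "F2": "italic", "F3": "underline"},
    "media": {"F1": "play", "F2": "pause", "F3": "skip"},
}

class TabbedKeyboard:
    def __init__(self, maps, initial="text"):
        self.maps = maps
        self.active = initial

    def press_tab(self, tab):
        # A top-row key switches the whole keyboard to another map.
        if tab in self.maps:
            self.active = tab

    def label_for(self, key):
        # What the per-key display should currently draw on this key;
        # keys absent from the active map keep their default label.
        return self.maps[self.active].get(key, key)

kb = TabbedKeyboard(KEYBOARD_MAPS)
kb.press_tab("media")
```

The design point this makes explicit is that the keyboard becomes state-dependent output as well as input: the same physical key can carry different meanings, with the display keeping the user informed of the current mapping.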
Addressing in-car interactive systems, Andreas Riener describes some of the advances in sensor and display technology that are part of current car dashboards and also introduces some that are forthcoming. In "Gestural Interaction in Vehicular Applications," Riener points out that this emerging design space is destined to grow in the future. As more applications for in-car computers appear, there is an increasing demand for technologies that will enable continuous cursor control on in-car screens, let drivers more intuitively select functions and menu items, and offer an alternative to the keyboard for text input. Indeed, the current proliferation of interactive buttons and switches can seriously distract drivers as they encourage participation in an extremely complex set of activities, including the use of multistage and modal controls, which, in turn, poses a threat to safety.
Focusing on infrastructure-less interfaces such as depth cameras, thermal imagery, and capacitive proximity sensing, the author describes two prototype systems developed by his research group: one uses a capacitive proximity sensing device, and the other, shown in Figure 3, uses an RGB-D camera to detect a more complex range of single-finger, multifinger, and whole-hand gestures. The author describes the design process in which he selected the application, developed the interface, and conducted an evaluation.
Figure 3. In-car interactive system. The system uses a depth camera to detect complex gestures, making it possible for the driver to execute multiple tasks simultaneously with one gesture.
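Once a camera tracks the hand, even very simple logic can turn its trajectory into a command. The sketch below is hypothetical and far simpler than the author's prototype—the function name and threshold are ours—but it shows the basic shape of trajectory-based gesture classification, here for a horizontal swipe:

```python
def classify_swipe(x_positions, min_travel=0.3):
    """Classify a tracked hand's horizontal path as a swipe.

    x_positions: normalized hand x-coordinates (0.0 = left edge of the
    camera frame, 1.0 = right edge) sampled over the gesture window.
    Returns "left", "right", or None if the hand did not travel far enough.
    """
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel > min_travel:
        return "right"
    if travel < -min_travel:
        return "left"
    return None

# e.g., a hand tracked moving from the left third of the frame to the right edge
gesture = classify_swipe([0.2, 0.4, 0.6, 0.9])
```

The minimum-travel threshold matters especially in the car: it rejects the small incidental hand movements of normal driving, so that only deliberate gestures trigger an action.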
In "Multisurface Interaction in the WILD Room," Michel Beaudouin-Lafon and coauthors describe their multisurface collaboration environment, shown in Figure 4, which is designed for the analysis of large and complex datasets. The key insight the authors offer is that, rather than distributing information across a variety of surfaces, the WILD platform distributes interaction across those surfaces.
Figure 4. Multisurface interaction. The wall display in the WILD room consists of 32 off-the-shelf 30-inch monitors organized in an 8 × 4 grid.
WILD incorporates four strategies for managing complex scientific data: navigation through visualizations and simulations of hundreds of thousands of data points; comparison of related images; juxtaposition of different forms of data such as research articles and graphics; and communication with remote colleagues.
The authors describe their participatory design approach, offer a detailed description of a design space for multiple kinds of interaction, and introduce a concept in which an instrument mediates interaction between the user and the object of interest. For example, pointing instruments can select items, while drag-and-drop instruments can move them. Instruments are independent of the objects upon which they operate, and multiple instruments can be used at one time.
In "Open Sesame: Design Guidelines for Invisible Passwords," Andrea Bianchi, Ian Oakley, and Dong-Soo Kwon discuss novel approaches to nonvisual, haptic PIN (personal identification number) entry systems, like the one shown in Figure 5. Through empirical study and a literature review, they provide an analysis of different implementations and offer suggestions for the design of new systems and methods.
Figure 5. Nonvisual, haptic PIN entry system. The haptic wheel is an alternative mechanism that communicates structured nonvisual information by counting the number of simple, short, pulse-like stimuli in a temporal sequence.
The authors argue that PIN entry systems tend to be particularly vulnerable to attacks because users can be observed while entering data onto a keypad. While nonvisual PINs are not observable, learning how to use these systems can be challenging. For example, the system might require a user to input data by applying different levels of pressure on different parts of a responsive screen or key-based interface. Although it might be fairly easy to remember graphical gesture points, understanding when and how much pressure to apply in a designated location can be difficult. The authors present an overview of several systems and offer insights into how easy they are to use.
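To make the idea concrete, the sketch below models PIN entry by counted haptic pulses. It is a hypothetical illustration, not the authors' design: the user feels a sequence of pulses, counts them, and confirms each digit, so an observer sees only uniform confirmation taps and learns nothing about the PIN itself:

```python
def enter_digit(pulse_count):
    """Map a counted number of haptic pulses to a PIN digit (0-9).

    Counting modulo 10 lets the interface deliver a different raw pulse
    count on every attempt, so repeated observations reveal nothing.
    """
    return pulse_count % 10

def check_pin(counted_pulses, secret):
    """Compare per-digit pulse counts against the secret PIN.

    The pulse counts are felt, not shown; only identical-looking
    confirmation taps are visible to a shoulder surfer.
    """
    entered = [enter_digit(c) for c in counted_pulses]
    return entered == list(secret)

# 11 pulses encodes the digit 1, so two sessions for the same PIN
# can present entirely different pulse sequences.
ok = check_pin([3, 11, 4, 1], [3, 1, 4, 1])
```

The usability cost the authors highlight is visible even here: every digit requires the user to attend to, and accurately count, a temporal sequence, which is slower and more error-prone than glancing at a keypad.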
In a contribution to the Invisible Computing column that parallels the theme of this special issue, Gary Marsden and coauthors offer a look at how technology is affecting the developing world. "Making Technology Invisible in the Developing World" describes three projects, two based in India and one in South Africa, that use innovative interaction technologies to help users gain access to information despite cost and literacy barriers.
The contributions selected for inclusion in this special issue are intended to provide an introduction to some of the most exciting developments in research on interaction with devices beyond the traditional keyboard and mouse. We invite readers to expand on the work described in these articles and explore yet other potential application areas.
is head of the Human-Computer Interaction Group in the Institute for Visualization and Interactive Systems at the University of Stuttgart, Germany. Contact him at email@example.com.
is a principal research scientist at Yahoo Research. Contact her at firstname.lastname@example.org.