, University of Massachusetts Lowell
Pages: 16–17
As robots become more autonomous, some people have speculated that it's only a matter of time before these systems have little or no need for user interfaces. For example, once an unmanned aerial vehicle (UAV) has received waypoints for a mission, it can fly the complete mission without further human intervention. But this viewpoint ignores the many reasons why people might need to direct even highly autonomous systems. Emergencies occur, such as an unexpected thunderstorm cell that requires rerouting the UAV. Serendipitous opportunities might also arise, such as interesting video data streamed from the UAV in real time, prompting a need to loiter or change waypoints. These brief examples demonstrate that dynamic situations will require human supervisors to vary their level of interaction as conditions change. Even as autonomy and sensing systems improve, humans will still want some level of supervisory control over robots, much as generals want over their troops.
This special issue presents five articles and one invited essay on designing and implementing human interaction with autonomous (or semiautonomous) physical robots. In keeping with the human focus, all of them discuss evaluations with human participants, and most of the evaluations take into account the effects of varying automation levels.
The articles describe systems for four domains: assistive technology, guidance and teaching, remote exploration, and airborne surveillance and combat. While all these systems work synchronously with humans, the humans' proximity to the robots varies widely. Humans sit on robotic wheelchairs, visitors to a science museum stand near humanoid robot guides, and operators of remote-exploration vehicles and UAVs are located far from their robotic partners. The degree of proximity is one factor affecting interaction design.
Brice Rebsamen and his colleagues describe their experiments with a wheelchair that uses electroencephalography (EEG) input, letting the user's brain signals control the wheelchair's motion. In contrast, Sarangi Parikh and his colleagues' wheelchair combines a joystick with voice commands, a fingermouse, and vision-based hand recognition. In both cases, users receive feedback directly, by feeling and seeing the wheelchair's motion.
Masahiro Shiomi and his colleagues' museum guides take their input from RFID tags worn by museum visitors and from infrared cameras. The robots engage in simple conversations with human visitors and with other robots, even though they aren't equipped with voice recognition.
Kristen Stubbs, Pamela Hinds, and David Wettergreen's remote autonomous explorer and Mary Cummings, Amy Brzezinski, and John Lee's UAV supervisory control interface both use standard keyboards, mice, and displays. Because operators can't directly view their robots, well-designed presentation of the robots' sensed data becomes paramount.
The field most germane to interacting with autonomy is human-robot interaction, a young endeavor that's very much a work in progress. HRI draws heavily on more mature fields such as human-computer interaction. In this issue, Parikh and his colleagues make good use of several HCI evaluation techniques: formal usability testing and the NASA Task Load Index (TLX).1 Their article reports on performance differences in autonomous, semiautonomous, and manual modes. Rebsamen and his colleagues also performed an initial usability test. Shiomi and his colleagues used a questionnaire to determine users' perceptions and attitudes about their experience with the robots, which have varying capabilities, including different types of autonomous behavior.
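To make the TLX technique concrete: it asks participants to rate six workload subscales from 0 to 100 and to weight each subscale by how often it was chosen in 15 pairwise "which mattered more" comparisons; the overall score is the weight-averaged rating. The sketch below illustrates the standard scoring arithmetic only; the function name and sample numbers are hypothetical, not drawn from the articles in this issue.

```python
# Illustrative sketch of NASA-TLX weighted scoring (standard procedure;
# sample ratings and weights are invented for demonstration).

SUBSCALES = ("mental", "physical", "temporal",
             "performance", "effort", "frustration")

def tlx_overall(ratings, weights):
    """Overall workload: sum(rating * weight) / 15, where the six
    pairwise-comparison weights sum to 15."""
    if sum(weights.values()) != 15:
        raise ValueError("pairwise-comparison weights must sum to 15")
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 30, "effort": 60, "frustration": 40}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}

print(tlx_overall(ratings, weights))  # 57.0
```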
HRI also draws on psychology and anthropology (chiefly field observations and ethnography, the study of humans in context). Common-ground theory2 springs from psychology; Stubbs, Hinds, and Wettergreen employed it in their study of scientists working remotely with an autonomous robot in Chile. Common-ground theory applies to dialogue: participants build shared knowledge, beliefs, and suppositions to jointly coordinate the meanings a speaker intends and the listener's understanding of those meanings. Stubbs and her colleagues used this theory to evaluate the nature of communication problems between robots and people, gathering the data necessary for their evaluation through field observations.
The computer-supported cooperative work community has also influenced HRI, because CSCW's focus on technology that facilitates group work is also relevant to technology that helps people and robots work together. Cummings, Brzezinski, and Lee describe how they evaluated the performance of human teams supervising the scheduling of multiple UAVs. Different automation levels influenced the types of information provided in displays designed to support human decision making.
HRI is heir to the tradition of questioning the ethics or values involved in using robots. In this issue's Expert Opinion department, our invited essayist, Ben Shneiderman (author of Leonardo's Laptop: Human Needs and the New Computing Technologies), talks about human responsibility in designing and deploying autonomous systems.
The articles in this issue provide a multidisciplinary look at designing and evaluating autonomous systems for human use. We hope they illustrate some of the many interesting research approaches for those who would like to make it easier for humans and robots to work together.