Pages: 193–195
The history of engineering has many examples of machines that solve human problems in a distinctly nonhuman way, and much has been learned from these examples. However, human-robot interaction is a field where it seems that imitating and understanding biological systems may be of particular interest.
This special issue has its origins in a tournament announced at the Seventh Annual Computational Motor Control Workshop, held in June 2011 at Ben-Gurion University, to compare algorithms for handshake generation and classification. The tournament realized the idea of a Turing-like handshake test (reviewed in the paper by Avraham et al.), which suggests that our ability to simulate human-like motor interaction can be measured by how well that interaction deceives a human observer into believing it was with a real human. The idea of the tournament was for algorithms to compete in their ability to produce handshakes similar to those of humans. Human judges were asked to distinguish between human and robotic handshakes, and artificial agents were asked to make similar judgments.
Our special issue kicks off with “Toward Perceiving Robots as Humans: Three Handshake Models Face the Turing-Like Handshake Test” by Guy Avraham, Ilana Nisky, Hugo L. Fernandes, Daniel E. Acuna, Konrad P. Kording, Gerald E. Loeb, and Amir Karniel, which presents three handshake algorithms, their performance, and the implications for the current state of the art in simulating human-likeness. Beyond the specific details of each handshake model, the handshake framework raises some general questions about interactions via touch, including the gap between perception and action, co-adaptation, safety, intuitiveness, and human-likeness. Addressing these questions will be critical for future robotic systems that are expected to interact directly with humans, assist them in performing physical tasks, enhance motor training and rehabilitation, and even interact socially, such as when shaking hands or dancing.
Seven other manuscripts were accepted for this special issue. They target research topics of haptic shared control, guidance, and negotiation using kinesthetic and tactile feedback in human-human, human-agent, and human-robot interaction, with applications to motor learning, rehabilitation, social interaction, and human-computer interaction.
Dane Powell and Marcia O'Malley investigated and compared different haptic shared-control guidance paradigms for motor learning in “The Task-Dependent Efficacy of Shared-Control Haptic Guidance Paradigms.” They introduce a taxonomy spanning assistance/resistance, confounding of task and assistance forces, and adjustment of assistance. They propose a novel shared-control proxy algorithm that supports stable implementation of a wide range of guidance strategies covered by the taxonomy. Four different guidance paradigms are implemented and used to train subjects in two dynamic tasks. Results of a user study confirm the “guidance hypothesis,” showing that challenge is essential for motor learning, but also highlighting the strong task-dependency of shared-control guidance techniques.
Samuel McAmis and Kyle Reed employed a bimanual haptic shared-control guidance scheme that separated task and guidance forces in “Simultaneous Perception of Forces and Motions Using Bimanual Interactions.” They explored how humans used guidance information applied to one hand to imitate paths with the other hand, which simultaneously experienced task-related forces. In this context, they examined which trajectories can be effectively transferred from guidance of one hand to movements of the other, using different reference frames and guiding stiffnesses as well as delays of the haptic task force. The authors found that subjects who explored a rod with one hand, while that hand was guided by passive movement of the other, could perceive its orientation just as well as subjects engaged in active determination of the rod's angle. Such findings are important for understanding the exchange of information between the cerebral hemispheres and for the design of rehabilitation robots used to encourage cortical plasticity and relearning in patients who have suffered unilateral damage, such as from stroke.
Using a specially designed wrist-robot device, Lorenzo Masia, Valentina Squeri, Etienne Burdet, Giulio Sandini, and Pietro Morasso investigated the role of haptic feedback in shaping coordination strategies among multiple degrees of freedom in a simple dynamic motor learning task in “Wrist Coordination in a Kinematically Redundant Stabilization Task.” Subjects were asked to stabilize a one-degree-of-freedom inverted pendulum using two degrees of freedom of their arm posture (wrist and elbow). The authors found that subjects select among these degrees of freedom depending upon the task's dynamical properties and upon the haptic feedback that is made available to them. This finding illustrates the importance of adequately designed haptic feedback and its potential to influence coordination strategies such as those targeted by sensorimotor rehabilitation treatments.
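The core of such a kinematically redundant stabilization task can be sketched in a few lines. The following is not the authors' model but a minimal illustration under assumed parameters: a linearized one-degree-of-freedom inverted pendulum, a PD stabilizer, and two redundant "joints" (wrist and elbow) whose hypothetical sharing weights `w_wrist` and `w_elbow` determine how the stabilizing torque is distributed.

```python
def simulate_stabilization(w_wrist=0.7, w_elbow=0.3, kp=40.0, kd=10.0,
                           theta0=0.2, g=9.81, length=1.0,
                           dt=0.001, steps=5000):
    """Linearized inverted pendulum, theta'' = (g/l)*theta + u, stabilized
    by a PD controller whose torque is split between two redundant joints.
    All parameters are illustrative, not taken from the paper."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        u_total = -(kp * theta + kd * omega)       # stabilizing PD torque
        u = w_wrist * u_total + w_elbow * u_total  # redundant sharing (weights sum to 1)
        alpha = (g / length) * theta + u           # linearized angular acceleration
        omega += alpha * dt                        # explicit Euler integration
        theta += omega * dt
    return theta  # residual tilt after the simulated trial
```

Any pair of weights summing to one yields the same net torque, which is precisely the redundancy the task exploits: the dynamics do not dictate a unique joint-sharing strategy, so the choice can be shaped by haptic feedback, as the paper reports.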
Haptic feedback has most often involved signals intended for the proprioceptors of the operator, i.e., force and position. Such information is relatively easy to obtain from transducers in a slave robot and to present via motors in the master controller. Dexterous manipulation of objects, however, depends heavily on tactile feedback, as is well known to anyone whose fingers have become numb from the cold. The rapidly evolving technology of tactors (haptic display devices targeting the operator's cutaneous receptors) is featured in two papers in this issue. In “Evaluation of Tactile Feedback Methods for Wrist Rotation Guidance,” Andrew Stanley and Katherine Kuchenbecker present the design of five different tactile devices for the wrist combined with two types of drive algorithms. They investigated different combinations of devices and algorithms in terms of their effectiveness for tasks requiring directional response, position targeting, and trajectory following. Their results show that the optimal combination of actuator and drive algorithm is highly task-specific but must generally include adequate cues for both movement direction and magnitude. In “A High Performance Tactile Feedback Display and Its Integration in Teleoperation,” Ioannis Sarakoglou, Nadia Garcia-Hernandez, Nikos Tsagarakis, and Darwin Caldwell present a new and enhanced version of a fingertip tactor array, a concept first introduced as the Optacon reading aid for the blind in the 1960s (“Optacon,” Wikipedia). They demonstrate the use of vibratory tactile feedback to enhance performance of an edge-tracking task using a telerobot.
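The requirement that a drive algorithm cue both direction and magnitude can be made concrete with a toy encoding. This is not one of the algorithms evaluated in the paper, only a hypothetical sketch: direction is signaled by which tactor fires, and magnitude by a normalized drive amplitude.

```python
def tactile_cue(error_deg, max_error=90.0):
    """Encode a wrist-rotation error as a simple vibrotactile cue.
    Returns (tactor, amplitude): the tactor signals direction, the
    amplitude (0..1) signals magnitude. Illustrative only; the
    parameter names and 90-degree saturation are assumptions."""
    if error_deg == 0:
        return None, 0.0                      # on target: no vibration
    amplitude = min(abs(error_deg) / max_error, 1.0)
    tactor = "clockwise" if error_deg > 0 else "counterclockwise"
    return tactor, amplitude
```

Even this trivial scheme exposes the design questions the paper studies empirically: where to place the tactors, how to saturate the magnitude mapping, and whether continuous amplitude or pulsed patterns convey error size more effectively.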
Telerobots physically separate the master controller that is operated by a human from the slave robot that interacts with the environment. Hongbo Wang and Kazuhiro Kosuge studied a more intimate interaction in which a robotic female dancer was guided by its direct interaction with a human dance partner in “Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.” Their approach is based on modeling each dance partner as an inverted pendulum and then predicting and minimizing the interaction forces by the robot taking the appropriate follower step. Results show the advantage of the more human-like pendulum model over classical admittance controllers and the importance of considering the whole physical coupled human-robot system for an accurate prediction of the current user state.
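The classical admittance controller that serves as the paper's baseline can be sketched briefly. The following is a generic one-axis version under assumed mass and damping values, not the authors' implementation: the robot renders a virtual mass-damper and moves so as to yield to the measured interaction force from the human partner.

```python
class AdmittanceController:
    """Classical admittance control along one axis: the robot behaves
    as a virtual mass-damper, m*a + b*v = f_ext, so it accelerates in
    the direction of the sensed interaction force. Parameter values
    are illustrative assumptions."""
    def __init__(self, mass=10.0, damping=20.0, dt=0.01):
        self.m, self.b, self.dt = mass, damping, dt
        self.v = 0.0  # current commanded velocity

    def step(self, f_ext):
        a = (f_ext - self.b * self.v) / self.m  # virtual dynamics
        self.v += a * self.dt
        return self.v * self.dt                 # position increment to command

ctrl = AdmittanceController()
for _ in range(1000):            # a constant 10 N lead force from the human
    ctrl.step(10.0)              # drives the follower toward v = f/b = 0.5 m/s
```

Because such a controller only reacts to the instantaneous force, it lags a partner who is executing a known step pattern; modeling both partners as coupled inverted pendulums, as the paper does, lets the robot predict the upcoming follower step instead of merely yielding to it.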
Finally, in “Supporting Negotiation Behavior with Haptics-Enabled Human-Computer Interfaces,” S. Ozgur Oguz, Ayse Kucukyilmaz, Tevfik Metin Sezgin, and Cagatay Basdogan investigate the ability of haptic feedback to support negotiations in human-computer interaction tasks using tools from game theory. They developed a game in which human and computer cooperate in performing a task and demonstrated that subjects were more successful in differentiating the preprogrammed computer negotiation behavior when haptic cues were displayed in addition to visual cues. This study suggests that it is useful to introduce haptics into human-computer interaction tasks, particularly those that permit multiple strategies that must be negotiated between the participants.
A central theme in all of the papers in this special issue is the bidirectional interaction between sensory information and motor behavior. As David Katz pointed out almost a century ago (The World of Touch, 1925), the somatosensory system differs from the other exteroceptive senses in that the information it provides is inextricably connected to the movements made to obtain it. This leads to a circularity that complicates experimental studies and their interpretation: each movement is continuously modified by sensory feedback, but the meaning of the sensory information depends on the movement. Robots provide both the need and a means to tease this problem apart because they are themselves capable of at least some human-like complexity. When humans interact with robots, they must use more of their human wiles to succeed. When robots interact with humans, their algorithms should reflect at least some of those wiles.
We wish to thank the editorial and administrative staff of the journal and, most importantly, the numerous anonymous reviewers who participated in the effort of screening the submissions for this special issue. Submissions in which any of the guest editors was an author were reviewed separately by anonymous associate editor(s) selected by the editor-in-chief. The peer-review system contributed significantly to the quality of the papers; each was revised at least once according to the reviewers' suggestions. Unfortunately, some fine submissions could not be revised before our special issue deadline; we look forward to their appearance in future publications. We hope that the articles presented here will advance research on haptics, sensorimotor neuroscience, and human-robot interactions.
Ferdinando A. Mussa-Ivaldi
Gerald E. Loeb