An Interview with Susan Lederman
J. Edward Colgate
IEEE Transactions on Haptics, Vol. 3, No. 4 (October-December 2010), pp. 231-233
Published by the IEEE Computer Society
Editor's Note: Over the past three years, Susan Lederman has worked tremendously hard on behalf of ToH, but with her retirement as Associate Editor-in-Chief, I could not resist asking for one last favor. My request was that she share some reflections on her storied career in haptics, and some advice for the next generation of researchers. Susan agreed, leading to the interview below. I hope you enjoy.
I understand that you were inspired by the work of J.J. Gibson to study active touch. Say more.
As an undergraduate, I spent my last summer at a Canadian government human factors lab in Toronto studying human perception. I decided to focus on the sense of touch there because I had learned absolutely nothing about this fascinating modality in any of my previous sensation and perception courses. What I didn't know was that very, very little was known about the tactual modality at that time. I chose to compare haptic and visual spatial processing of position and orientation. By the end of my summer project, I was completely hooked and decided that I would focus on touch when it came time to do my PhD.
That summer I quickly learned that most touch research was conducted by behavioral psychophysicists and peripheral single-unit physiologists, and that both groups seemed to be interested in threshold-level sensation. They used a variety of tools (e.g., needles; cold, warm, and rounded probes; hairs of different diameters) to tap, poke, prick, and prod the skin of completely stationary ("passive") observers. While I found the work interesting, I also felt it was quite limited in scope. I remember thinking at the time that people also use their sense of touch in far more active ways to manually explore and perceive objects and their properties and to determine the spatial location and movement of these objects in the external world.
J.J. Gibson's book, The Senses Considered as Perceptual Systems [1], as well as earlier ones by D. Katz [2] and G. Revesz [7], were the only ones I could find that emphasized what otherwise appeared notable by its absence in the touch research literature, namely, that people are commonly active when manually learning about the external world and its properties. As Gibson eloquently observed, when people actively touch, they naturally focus their attention on the external world and its concrete properties. This contrasts strongly with the nature of the experiences that result when people are passively touched. In the latter case, people generally tend to focus on their internal sensations. With these issues and distinctions in mind, for my PhD, I deliberately chose to examine the perception of surface texture, a task that humans actively perform very often and extremely well.
I should mention that I did take issue with Gibson's complete lack of interest in the sensory (and motor) neural mechanisms and processes that underlie both tactile and haptic processing. I am pleased to see that such research is now underway, although, understandably, a considerable amount of work remains to be done.
OK, how about that word "haptic"? What does it really mean? Or, put differently, what is the proper way to use it?
For many on the haptics technology and applications side, the term "haptics" has become nearly synonymous with the more general term "touch." However, on the scientific side, back in 1986, Jack Loomis and I emphasized that the "tactile" and "haptic" touch subsystems differ in terms of the sensory inputs available during contact: Whereas the tactile subsystem uses only cutaneous inputs, the haptic subsystem uses a combination of cutaneous and kinesthetic inputs [5]. Loomis and I further differentiated haptics into "active haptics," involving voluntary exploration by the observer, and "passive haptics," which involves involuntary movement of the observer's limb by some external agent.
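As an illustrative aside, this taxonomy can be written down as a small data model. The following Python sketch is purely illustrative; the type and field names are invented here, but the classification logic follows the distinctions just described.

from dataclasses import dataclass
from enum import Enum, auto

class TouchMode(Enum):
    TACTILE = auto()         # cutaneous inputs only (stationary limb)
    PASSIVE_HAPTIC = auto()  # cutaneous + kinesthetic; limb moved by an external agent
    ACTIVE_HAPTIC = auto()   # cutaneous + kinesthetic; voluntary exploration

@dataclass
class ContactEpisode:
    kinesthetic: bool  # is limb position/movement information available?
    voluntary: bool    # does the observer control the movement?
    # Cutaneous input is assumed present whenever skin contact occurs.

def classify(episode: ContactEpisode) -> TouchMode:
    """Classify a touch episode following the taxonomy of [5]."""
    if not episode.kinesthetic:
        return TouchMode.TACTILE
    return TouchMode.ACTIVE_HAPTIC if episode.voluntary else TouchMode.PASSIVE_HAPTIC

# A probe pressed onto a stationary fingertip: purely tactile.
print(classify(ContactEpisode(kinesthetic=False, voluntary=False)))  # TouchMode.TACTILE
# The experimenter guides the observer's finger across a surface: passive haptics.
print(classify(ContactEpisode(kinesthetic=True, voluntary=False)))   # TouchMode.PASSIVE_HAPTIC
# The observer freely explores an object: active haptics.
print(classify(ContactEpisode(kinesthetic=True, voluntary=True)))    # TouchMode.ACTIVE_HAPTIC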
In keeping with the distinction between active and passive touch, neuroscientists and behavioral scientists apply stimuli to the skin of a completely passive observer in order to isolate the sole contribution of cutaneous inputs to performance of a given perceptual task. They sometimes include a second touch condition in which the observer engages in completely active haptic exploration. Comparing performance in these two conditions (passive versus active) allows the scientist to determine any additional contribution of observer-controlled kinesthetic inputs due to position and movement of the limbs.
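Continuing the aside, this passive-versus-active comparison maps naturally onto a simple within-subjects analysis. The sketch below, with invented accuracy scores and a paired t-test from SciPy, illustrates the kind of statistical comparison involved; it does not reproduce any actual study.

# Hypothetical within-subjects comparison: each participant performs the
# same perceptual task under passive stimulation and under active haptic
# exploration; a paired t-test asks whether voluntary kinesthetic input
# improves accuracy. All numbers below are invented for illustration.
from scipy.stats import ttest_rel

passive = [0.62, 0.58, 0.71, 0.65, 0.60, 0.68, 0.63, 0.66]  # proportion correct
active  = [0.79, 0.74, 0.83, 0.77, 0.72, 0.85, 0.76, 0.80]  # same participants

t_stat, p_value = ttest_rel(active, passive)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Active exploration reliably outperforms passive stimulation here.")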
What do you think about the state of haptics interfaces?
I think it is in a really exciting phase! My automated Google search for "haptics" must be coming up with at least 10 (multi-item) results per day!! The potential for using "haptics" in interfaces for virtual-environment and teleoperator applications is enormous, and appears to be growing exponentially. Application domains include, but are far from limited to, animation, product design, virtual medical simulators, and surgical training systems for minimally invasive surgery and, more generally, for telemedicine, dentistry, education (e.g., teaching mechanics), rehabilitation, sport, recreation and entertainment, underwater recovery, commercial deep sea mining, forensic facial reconstruction, e-commerce, and remote communication.
In further considering this question, I should explain that I come at it from a scientific perspective, my background spanning a number of complementary fields that all pertain to the human user (psychophysics, perception, cognition, motor control, cognitive science, neuroscience, and human factors). When I entered the field of haptic interfaces at its inception, I realized that engineers and computer scientists could benefit enormously from learning about and working with researchers on the human user side. In my opinion, scientific research on human haptics makes two vital contributions to the creation of any effective haptic interface. First, such research can inspire designers to come up with scientifically grounded—and thus principled—ideas, hypotheses, models, and theories about haptic function and information processing with possible relevance to the effective design of haptic interfaces. Second, the scientific method provides designers with an arsenal of tools, such as experimental design and statistical testing, that can be used to validly assess the robustness and generality of any haptic interface. What I find most gratifying is that I have seen more and more engineers and computer scientists embrace this approach in which they learn from and work with touch scientists.
There have been clear commercial successes in the field of haptic interfaces. Touch screens (for phones, laptops, netbooks, and tablets) and the use of a vibrating signal in mobile phones have been eagerly embraced, and would seem to be here to stay. However, they do not provide force feedback as does, for example, the PHANToM. I anticipate that the next generation of haptic interfaces will focus on presenting haptic feedback (i.e., both cutaneous and kinesthetic cues) that will aid the user in remotely perceiving and manipulating both real and virtual environments. To achieve this, I believe active collaboration among engineers, computer scientists, and touch scientists will prove crucial from the first to the final stage of interface design and evaluation.
Neuroscience is playing a bigger role in haptics. What are some of the opportunities for advancing the field?
As I explained earlier, haptic (and tactile) neuroscience is still at an early stage. I believe it will ultimately serve as a valuable component for understanding how haptic inputs are processed and represented. One very interesting finding I would like to mention is that when a monkey used a hand-held tool (i.e., a rake) to get food, the tool was incorporated into its body image in tool-use-specific areas of the brain. This presumably serves to neurally update and maintain the body representation in order to guide the tool precisely toward the target [6]. I am intrigued by the possibility that this finding may also prove relevant to the use of haptic interfaces.
Research into the neural basis of unisensory haptic object perception and multisensory integration has expanded in the last 10 to 15 years. Although there has been relatively little work involving the sense of touch to date, I do note that haptic perception of objects (including faces) appears to activate both unimodal and multisensory regions, the latter responding to common inputs from either vision or touch. Such investigations contribute to our understanding of how inputs from multiple sensory systems (touch, vision, and audition) are processed and represented. As such, they may prove useful in the design of multisensory interfaces, which I anticipate will become a very common form of sensory interface.
Finally, I have been intrigued by the possibility that the issue of telepresence/realism may be investigated using a neural imaging approach. To this end, activation patterns produced in a targeted brain area or neural circuit could be compared as the haptic observer perceives either a real or a virtual/illusory environment. To the extent that telepresence is achieved, would the same neural areas be activated? Would the magnitude of activation be comparable?
What are the biggest challenges that face the field today? What are the biggest questions that lie unanswered?
I think that several continuing challenges relate to making our haptic interfaces more seamless (i.e., a perceptual extension of the limb), ultra-reliable, and natural—or at least, easy—to use. Using our thumbs to enlarge a screen or click on an app is highly natural. A point-contact interface with a probe-like input device is easy to use for extracting surface texture; moreover, the probe becomes perceptually invisible, so that you directly feel the surface beyond its tip. However, such a system is not very efficient, in terms of either time or accuracy, as a means of haptically extracting contour information in either 2D or 3D haptic space. It forces the user to explore sequentially, and therefore imposes considerable cognitive load in terms of sensory integration and memory. I would anticipate that more complex tasks requiring haptic contour extraction will be enhanced by using whole-hand exploration with one, or even two, hands, depending on the scale of the display. Thus, for such higher-level manual tasks, I would strongly advocate the technologically demanding goal of developing multifingered haptic interfaces.
And since manipulating objects in teleoperator and virtual environments involves knowledge of human motor planning and control, I would generally encourage the involvement of more researchers on that side. We also need a better understanding of haptically driven motor control; to date, most human sensory-motor research has focused on visual guidance.
Purely haptic interfaces will likely prove of greatest value to those who are blind or visually impaired. So I heed the medical profession's alert that in the very near future, society should expect a steep rise in visual impairment and blindness due to a marked increase in the incidence of age-related macular degeneration. Haptic interfaces should also prove valuable to the sighted when visual information is unavailable for a variety of reasons, such as performing tasks in murky water or unclean air, in low levels of illumination or darkness, and when the operator's hand obstructs the view of the targeted object. At the same time, I suspect that an increasingly common scenario will involve the use of multisensory interfaces in which two (or perhaps three) sensory modalities aid and enhance one another, with each modality performing the functions it does most effectively. Although extremely challenging from engineering, computing, and scientific perspectives, I would advocate the design and development of more multisensory interfaces.
What advice do you have for a young person getting started in this field?
Many of the young researchers entering the field of haptics come from engineering and/or computer science disciplines. However, as I mentioned earlier, these fields also intersect with other important disciplines that focus on the human user. To be highly effective, haptic-interface design needs to take into consideration not only the hardware and software, but also the human user. I therefore encourage any researcher interested in the haptic-interface field to learn more about the research methods used to assess the human user's tactile (cutaneous) and haptic (cutaneous and kinesthetic) functions. The earliest research (both behavioral and neural) focused on simple threshold-related phenomena and their underlying peripheral mechanisms, for which the stimuli are relatively simple to produce and control (e.g., vibration). I predict that as the field matures and tasks performed with haptic interfaces become increasingly complex, it will become even more important for engineers and computer scientists to learn about and collaborate with touch scientists throughout the entire design/evaluation cycle.
What do you see as the next big directions?
Affective haptics is just emerging as an exciting new thrust in the design of haptic interfaces that can produce, alter, or elicit human emotional communication remotely. I see affective haptics as potentially playing an extremely valuable role in the personal care of and emotional communication with elderly individuals living independently or in assisted-living environments. Enhanced emotional telecontact ("reach out and touch someone") with distant relatives, friends, and medical personnel can potentially enhance quality of life for people who often become increasingly lonely and isolated, and who must rely on increasingly impaired distance senses, particularly sight and hearing.
Scientific work on haptic affective communication via contact with the arm or the face is a relatively new development. With the assistance of many highly talented colleagues, students, and staff, my research program on haptic face perception has revealed, for example, that people are capable of haptically classifying facial identity and basic expressions of emotion (especially when they dynamically unfold beneath the observer's hands) at levels well above chance using live models, rigid 3D face masks, and 2D raised-line drawings (for a review of this work, see [4]). Three-dimensional robotic heads with facial features that could be dynamically moved under the user's hands to express different facial emotions (rather than via flat visual monitors) might ultimately be controlled autonomously or teleoperated by a live communicator wearing an external haptic input device on their face.
Another fruitful direction I see beginning to emerge is the use of robust tactile, haptic, and multisensory illusions to alter the user's perception and thereby enhance the manner in which information is displayed. Capitalizing on tactile and haptic illusions may serve as a way of simplifying the design of haptic interfaces, thus rendering them more cost-effective. Recent research with virtual environments (e.g., [3]) has shown that an illusory haptic experience can even be created by presenting information via a different modality (e.g., vision).
Any final thoughts?
I did want to take this occasion to thank you, Ed, for your impressive leadership as Editor-in-Chief during the inauguration of the IEEE Transactions on Haptics (ToH), which has proved to be a most excellent addition to IEEE's stable of technical journals. I would also like to acknowledge Domenico Prattichizzo (ToH's other Associate Editor-in-Chief), the Associate Editors, and the ToH reviewers, all of whom have made exceptionally high-quality contributions to this journal. And I very much look forward to what the future might bring to the field of haptic interfaces. Surprise me!!

For information on obtaining reprints of this article, please send e-mail to: toh@computer.org.

REFERENCES