Using Haptic and Auditory Interaction Tools to Engage Students with Visual Impairments in Robot Programming Activities
1. Writing the program based on the robot command set (library), as illustrated in the sketch after this list,
2. Compiling the program,
3. Downloading the code onto the robot,
4. Running the code, and
5. Adapting the program based on evaluation of the robot actions.
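To make step 1 concrete, the following minimal sketch (in Python) shows the kind of program a student might write against a robot command set. The function names forward_cm and turn_degrees are illustrative stand-ins, not the actual NXT library used in this work; here they simply log the intended action so the sketch runs on its own.

    # Hypothetical robot command set; the real system would send these
    # commands to the NXT robot rather than print them.
    def forward_cm(distance_cm):
        print(f"ROBOT: drive forward {distance_cm} cm")

    def turn_degrees(angle_deg):
        direction = "right" if angle_deg > 0 else "left"
        print(f"ROBOT: turn {direction} {abs(angle_deg)} degrees")

    # The student's program (step 1): a short navigation sequence that
    # would then be compiled, downloaded, run, and adapted (steps 2-5).
    forward_cm(30)
    turn_degrees(90)
    forward_cm(20)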
Hypothesis 1: Existing computer accessibility technology (e.g., screen readers and screen magnifiers) can be modified and integrated to provide sufficient feedback for students with visual impairments to enable the programming process.
Hypothesis 2: Correlating haptic and/or audio feedback with real-time program execution can provide sufficient feedback to enable students with visual impairments to visualize their programmed robot sequences.
Hypothesis 3: Enabling automated verbal feedback to summarize program output after completion can provide sufficient feedback to enable students with visual impairments to understand changes that may be required in their program.
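To illustrate the mechanism behind Hypothesis 3, the sketch below shows one way logged run events could be condensed into a single spoken-style summary after program completion. The event format and summary wording are assumptions made for illustration, not the authors' implementation; the resulting string would be passed to a speech synthesizer or screen reader.

    def summarize_run(events):
        """Condense a list of (event_type, value) pairs into one spoken sentence."""
        total_cm = sum(v for e, v in events if e == "forward")
        turns = sum(1 for e, _ in events if e == "turn")
        bumps = sum(1 for e, _ in events if e == "bump")
        reached = any(e == "goal" for e, _ in events)
        summary = (f"The robot traveled {total_cm} centimeters, "
                   f"made {turns} turn(s), and detected {bumps} collision(s). ")
        summary += "It reached the goal." if reached else "It did not reach the goal."
        return summary

    # Example: a run that traveled 50 cm, turned once, and hit an obstacle.
    print(summarize_run([("forward", 30), ("turn", 90),
                         ("forward", 20), ("bump", 1)]))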
1. Travel distance feedback: Feedback is triggered at fixed distance intervals while the robot moves forward. For our experiments, the interval is set to 10 cm, which is compatible with the size of the robot.
2. Turning left/right feedback: Different but symmetric signals provide feedback on the robot's turn status at fixed angular increments. In this experiment, feedback is triggered every 45 degrees, an increment chosen based on the average robot speed and the accuracy of the odometry calculation.
3. Object distance feedback: An ultrasonic sensor, attached to the front of the NXT robot, detects obstacles between approximately 5 and 50 cm. Feedback is generated in fixed distance increments.
4. Bump feedback: When the robot collides with an obstacle, the mechanical system of the NXT robot triggers an exception. Feedback is associated with this condition in real time to provide the user immediate information about collisions.
5. Goal feedback: When the robot reaches the goal position, the goal event (triggered by the light sensor) is activated. This feedback is also provided in real time to inform the user that the robot has successfully reached its final destination. A runnable sketch of all five trigger conditions follows this list.
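The sketch below gathers the five trigger conditions above into a single per-cycle check, assuming the robot exposes odometry and sensor readings each control cycle. The interface and the emit placeholder (which in the real system would drive an audio tone or vibration pattern) are assumptions for illustration, not the authors' NXT code.

    DIST_STEP_CM = 10    # travel feedback interval (item 1)
    TURN_STEP_DEG = 45   # turning feedback interval (item 2)
    OBJ_STEP_CM = 10     # object-distance feedback increment (item 3)

    def emit(cue, detail):
        # Placeholder for the actual haptic/audio signal generation.
        print(f"FEEDBACK[{cue}]: {detail}")

    def feedback_step(state, odom_cm, heading_deg, obstacle_cm, bumped, at_goal):
        """Check each trigger once per control cycle. 'state' remembers the
        last threshold crossed so each cue fires only on a new increment."""
        if odom_cm - state["last_dist"] >= DIST_STEP_CM:                  # item 1
            state["last_dist"] += DIST_STEP_CM
            emit("travel", f"{state['last_dist']} cm traveled")
        if abs(heading_deg - state["last_heading"]) >= TURN_STEP_DEG:     # item 2
            side = "right" if heading_deg > state["last_heading"] else "left"
            state["last_heading"] = heading_deg
            emit("turn", f"turned {TURN_STEP_DEG} degrees {side}")
        if obstacle_cm is not None and 5 <= obstacle_cm <= 50:            # item 3
            band = int(obstacle_cm // OBJ_STEP_CM) * OBJ_STEP_CM
            if band != state["last_band"]:
                state["last_band"] = band
                emit("object", f"obstacle within {band + OBJ_STEP_CM} cm")
        if bumped:                                                        # item 4
            emit("bump", "collision detected")
        if at_goal:                                                       # item 5
            emit("goal", "goal reached")

    # Example cycle: 12 cm of travel and an obstacle 18 cm ahead.
    state = {"last_dist": 0, "last_heading": 0, "last_band": None}
    feedback_step(state, odom_cm=12, heading_deg=0,
                  obstacle_cm=18, bumped=False, at_goal=False)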
1. An estimated 3.5 million Americans have low vision. Out of that group, approximately one million meet the legal criteria for blindness [18].
2. Haptic feedback is commonly represented by tactile and/or force feedback. In most haptic studies, tactile feedback is created via heat, pressure, and/or vibrations [28].
Ayanna M. Howard is an associate professor at the Georgia Institute of Technology. Her research centers on the concept of humanized intelligence, the process of embedding human cognitive capability into the control path of autonomous systems. This work, which addresses issues of autonomous control as well as aspects of interaction with humans and the surrounding environment, has resulted in more than 100 peer-reviewed publications across projects ranging from scientific rover navigation in glacier environments to assistive robots for the home. To date, her accomplishments have been documented in more than 12 featured articles, including being named one of the world's top young innovators of 2003 by MIT Technology Review and appearing in TIME Magazine's “Rise of the Machines” article in 2004. She received the IEEE Early Career Award in Robotics and Automation in 2005 and is a senior member of the IEEE.
Chung Hyuk Park received the BS degree in electrical and computer engineering and the MS degree in electrical engineering and computer science from Seoul National University, Korea, in 2000 and 2002, respectively. He is currently a graduate research assistant and working toward the PhD degree at the Human Automation Systems Laboratory (HumAnS Lab.) in the Department of Electrical and Computer Engineering at the Georgia Institute of Technology. His research interests include robotics and intelligent systems, computer vision, human-robot interaction, haptic/multimodal systems, and assistive robotics. He is a member of the IEEE.
Sekou Remy received the PhD degree from the Georgia Institute of Technology's School of Electrical and Computer Engineering, where he was a member of the Human Automation Systems Laboratory. He is currently a Moreau postdoctoral fellow at the University of Notre Dame. Prior to Georgia Tech, he attended Morehouse College, where he studied computer science and electrical engineering as a participant in the AUC Dual Degree Engineering Program. He is a member of the IEEE.