IEEE Transactions on Haptics, vol. 7, no. 3, July-Sept. 2014
ISSN: 1939-1412
pp. 394-404
Myongchan Kim, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
Sungkil Lee, Department of Computer Science and Engineering, Sungkyunkwan University, 2066 Seobu-Ro, Jangan-Gu, Suwon, Republic of Korea
Seungmoon Choi, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
Tactile feedback coordinated with visual stimuli has proven its worth in mediating immersive multimodal experiences, yet its authoring has relied on content artists. This article presents a fully automated framework for generating tactile cues from streaming images to provide synchronized visuotactile stimuli in real time. The spatiotemporal features of video images are analyzed on the basis of visual saliency and then mapped to tactile cues rendered on tactors installed on a chair. We also conducted two user experiments for performance evaluation. The first experiment investigated the effects of visuotactile rendering against visual-only rendering and demonstrated that visuotactile rendering makes the movie-watching experience more interesting, immersive, and understandable. The second experiment compared the effectiveness of authoring methods and found that the automated approach, used with care, can produce plausible tactile effects similar in quality to those of manual authoring.
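The pipeline the abstract describes (per-frame saliency analysis followed by a mapping onto a grid of chair-mounted tactors) can be illustrated with a small sketch. The Python example below is a minimal illustration and not the authors' implementation: it substitutes spectral-residual saliency (Hou and Zhang, CVPR 2007) for the paper's saliency model, and the frame size, the 4x4 tactor grid, and the direct saliency-to-amplitude mapping are assumptions made for demonstration.

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    """Saliency map in [0, 1] for one grayscale frame (stand-in model)."""
    spectrum = np.fft.fft2(gray.astype(np.float64))
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    # Spectral residual = log amplitude minus its local average.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=3.0)
    saliency -= saliency.min()
    return saliency / (saliency.max() + 1e-8)

def tactor_intensities(saliency: np.ndarray, rows: int = 4, cols: int = 4) -> np.ndarray:
    """Pool the saliency map into a rows x cols grid of tactor amplitudes.

    Grid size and the linear saliency-to-amplitude mapping are illustrative
    assumptions; the paper's actual cue mapping differs.
    """
    h, w = saliency.shape
    bh, bw = h // rows, w // cols
    trimmed = saliency[: bh * rows, : bw * cols]
    pooled = trimmed.reshape(rows, bh, cols, bw).mean(axis=(1, 3))
    return np.clip(pooled, 0.0, 1.0)

if __name__ == "__main__":
    frame = np.random.rand(240, 320)  # stand-in for one decoded video frame
    cues = tactor_intensities(spectral_residual_saliency(frame))
    print(cues)  # one vibration amplitude per chair-mounted tactor

In a real-time streaming setting, this per-frame computation would run inside the render loop, with temporal smoothing across frames to avoid abrupt changes in tactor amplitude.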
Visualization, Spatiotemporal phenomena, Rendering (computer graphics), Haptic interfaces, Streaming media, Motion pictures, Image color analysis

M. Kim, S. Lee and S. Choi, "Saliency-Driven Real-Time Video-to-Tactile Translation," in IEEE Transactions on Haptics, vol. 7, no. 3, pp. 394-404, 2014.