Saliency-Driven Real-Time Video-to-Tactile Translation
PrePrint
ISSN: 1939-1412
Myongchan Kim, Pohang University of Science and Technology (POSTECH), Pohang
Sungkil Lee, Sungkyunkwan University, Suwon
Seungmoon Choi, Pohang University of Science and Technology (POSTECH), Pohang
Tactile feedback coordinated with visual stimuli has proven its worth in mediating immersive multimodal experiences, yet its authoring has relied on content artists. This article presents a fully automated framework for generating tactile cues from streaming images to provide synchronized visuotactile stimuli in real time. The spatiotemporal features of video images are analyzed on the basis of visual saliency and then mapped to tactile cues rendered on tactors installed on a chair. We also conducted two user experiments for performance evaluation. The first experiment compared visuotactile rendering against visual-only rendering, demonstrating that visuotactile rendering made the movie-watching experience more interesting, immersive, and understandable. The second experiment compared the effectiveness of authoring methods and found that the automated approach, used with care, can produce plausible tactile effects similar in quality to manually authored ones.
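The abstract outlines a per-frame pipeline: compute a visual saliency map, pool it over regions, and map the pooled values to drive signals for a tactor array. The sketch below illustrates only this general idea, not the authors' method; it assumes OpenCV's contrib saliency module (opencv-contrib-python), a hypothetical 4x3 tactor layout, and a hypothetical send_to_tactors() driver call.

```python
# Minimal sketch of a saliency-to-tactile loop (assumptions noted above).
import cv2
import numpy as np

TACTOR_ROWS, TACTOR_COLS = 4, 3  # assumed tactor grid on a chair back

# Spectral-residual static saliency from OpenCV's contrib module.
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()

def frame_to_tactile(frame):
    """Return a TACTOR_ROWS x TACTOR_COLS array of drive amplitudes in [0, 1]."""
    ok, sal_map = saliency.computeSaliency(frame)  # float32 map in [0, 1]
    if not ok:
        return np.zeros((TACTOR_ROWS, TACTOR_COLS))
    h, w = sal_map.shape
    amps = np.zeros((TACTOR_ROWS, TACTOR_COLS))
    for r in range(TACTOR_ROWS):
        for c in range(TACTOR_COLS):
            # Pool saliency over the image region covered by this tactor.
            block = sal_map[r * h // TACTOR_ROWS:(r + 1) * h // TACTOR_ROWS,
                            c * w // TACTOR_COLS:(c + 1) * w // TACTOR_COLS]
            amps[r, c] = float(block.mean())
    return amps

cap = cv2.VideoCapture("movie.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    amplitudes = frame_to_tactile(frame)
    # send_to_tactors(amplitudes)  # hypothetical call to the tactor driver
cap.release()
```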
Index Terms:
Haptic I/O; Artificial, augmented, and virtual realities; H.5.1.d Evaluation/methodology
Citation:
Myongchan Kim, Sungkil Lee, Seungmoon Choi, "Saliency-Driven Real-Time Video-to-Tactile Translation," IEEE Transactions on Haptics, 22 Nov. 2013. IEEE Computer Society Digital Library. IEEE Computer Society, <http://doi.ieeecomputersociety.org/10.1109/TOH.2013.58>