2010 IEEE Symposium on 3D User Interfaces (3DUI)
Waltham, MA
Mar. 20, 2010 to Mar. 21, 2010
ISBN: 978-1-4244-6846-1
pp: 103-110
Ryo Fukazawa , Osaka Univ., Suita, Japan
Kazuki Takashima , Osaka Univ., Suita, Japan
Garth Shoemaker , Univ. of British Columbia, Vancouver, BC, Canada
Yoshifumi Kitamura , Osaka Univ., Suita, Japan
Yuichi Itoh , Osaka Univ., Suita, Japan
Fumio Kishino , Osaka Univ., Suita, Japan
ABSTRACT
This paper compares multimodal interaction techniques in a perspective-corrected multi-display environment (MDE). The performance of multimodal interactions using gestures, eye gaze, and head direction is experimentally examined in an object manipulation task in MDEs and compared with that of a mouse-operated perspective cursor. Experimental results showed that gesture-based multimodal interactions achieve task completion times equivalent to those of the mouse-based perspective cursor. A technique utilizing the user's head direction received positive comments from subjects even though it was not as fast. Based on the experimental results and observations, we discuss the potential of multimodal interaction techniques in MDEs.
INDEX TERMS
gesture-based multimodal interactions, perspective-corrected multi-display environment, MDE, eye gaze, head direction, object manipulation, mouse-operated perspective cursor
CITATION

R. Fukazawa, K. Takashima, G. Shoemaker, Y. Kitamura, Y. Itoh and F. Kishino, "Comparison of multimodal interactions in perspective-corrected multi-display environment," 2010 IEEE Symposium on 3D User Interfaces (3DUI), Waltham, MA, 2010, pp. 103-110.
doi:10.1109/3DUI.2010.5444711