2010 IEEE Symposium on 3D User Interfaces (3DUI) (2010)
Mar. 20, 2010 to Mar. 21, 2010
Ryo Fukazawa, Osaka Univ., Suita, Japan
Kazuki Takashima, Osaka Univ., Suita, Japan
Garth Shoemaker, Univ. of British Columbia, Vancouver, BC, Canada
Yoshifumi Kitamura, Osaka Univ., Suita, Japan
Yuichi Itoh, Osaka Univ., Suita, Japan
Fumio Kishino, Osaka Univ., Suita, Japan
This paper compares multimodal interaction techniques in a perspective-corrected multi-display environment (MDE). The performance of multimodal interactions using gestures, eye gaze, and head direction is experimentally examined in an object manipulation task in MDEs and compared with a mouse-operated perspective cursor. Experimental results showed that gesture-based multimodal interactions provide task completion times equivalent to those of mouse-based perspective cursors. A technique utilizing the user's head direction received positive comments from subjects even though it was not as fast. Based on the experimental results and observations, we discuss the potential of multimodal interaction techniques in MDEs.
gesture-based multimodal interactions, perspective-corrected multi-display environment, MDE, eye gaze, head direction, object manipulation, mouse-operated perspective cursor
G. Shoemaker, R. Fukazawa, Y. Kitamura, Y. Itoh, F. Kishino and K. Takashima, "Comparison of multimodal interactions in perspective-corrected multi-display environment," 2010 IEEE Symposium on 3D User Interfaces (3DUI), Waltham, MA, 2010, pp. 103-110.