IEEE Computer Graphics and Applications, vol. 26, no. 3, May/June 2006
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MCG.2006.66
Mathias Koelsch , Naval Postgraduate School
Ryan Bane , Microsoft Corporation
Tobias Hoellerer , University of California, Santa Barbara
Matthew Turk , University of California, Santa Barbara
Wearable computers and their novel applications demand more context-specific user interfaces than traditional desktop paradigms can offer. This article describes a multimodal interface, explaining how it enhances a mobile user's situational awareness and provides new functionality. The mobile, augmented-reality system visualizes otherwise invisible information encountered in urban environments. A versatile filtering tool allows interactive display of occluded infrastructure and of dense data distributions, such as room temperature or wireless network strength, with applications in building maintenance, emergency response, and reconnaissance missions. To control this complex application functionality in the real world, the authors combine multiple input modalities (vision-based hand gesture recognition, a 1D tool, and speech recognition) with three late-integration styles to provide intuitive and effective means of input. The system is demonstrated in a realistic indoor and outdoor task environment, and preliminary user experiences are described. The authors postulate that novel interaction metaphors must be developed together with user interfaces capable of controlling them.
augmented reality, wearable computing, multimodal interface, hand-gesture recognition, information visualization
M. Turk, M. Koelsch, T. Hoellerer and R. Bane, "Multimodal Interaction with a Wearable Augmented Reality System," in IEEE Computer Graphics and Applications, vol. 26, no. 3, pp. 62-71, May/June 2006.