2010 43rd Hawaii International Conference on System Sciences (HICSS)
Koloa, Kauai, Hawaii
Jan. 5, 2010 to Jan. 8, 2010
ISBN: 978-0-7695-3869-3
pp: 1-10
Reliable high-level fusion of several input modalities is hard to achieve, and (semi-)automatically generating it is even more difficult. However, addressing it is important in order to broaden the scope of providing user interfaces semi-automatically. Our approach starts from a high-level discourse model created by a human interaction designer. It is modality-independent, so an annotated discourse is semi-automatically generated, which influences the fusion mechanism. Our high-level fusion checks hypotheses from the various input modalities by use of finite state machines. These are modality-independent, and they are automatically generated from the given discourse model. Taking all this together, our approach provides semi-automatic generation of high-level fusion. It currently supports the input modalities graphical user interface, (simple) speech, a few hand gestures, and a bar code reader.
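The following is a minimal sketch of the idea of checking multimodal input hypotheses with a finite state machine. All names (states, tokens, modalities, thresholds) are illustrative assumptions, not taken from the paper, and in the described approach such machines would be generated automatically from the discourse model rather than written by hand.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    modality: str      # e.g. "speech", "gui", "gesture", "barcode" (assumed labels)
    token: str         # recognized input event, e.g. "select_item"
    confidence: float  # recognizer confidence in [0, 1]

class FusionFSM:
    """Accepts hypotheses whose tokens follow the expected discourse transitions."""

    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # {(state, token): next_state}
        self.state = start
        self.accepting = accepting

    def feed(self, hyp: Hypothesis, threshold: float = 0.5) -> bool:
        """Advance on a hypothesis; reject low-confidence or unexpected input."""
        if hyp.confidence < threshold:
            return False
        key = (self.state, hyp.token)
        if key not in self.transitions:
            return False
        self.state = self.transitions[key]
        return True

    def accepted(self) -> bool:
        return self.state in self.accepting

# Hypothetical machine for a "select item, then confirm" discourse step.
fsm = FusionFSM(
    transitions={("idle", "select_item"): "selected",
                 ("selected", "confirm"): "done"},
    start="idle",
    accepting={"done"},
)
fsm.feed(Hypothesis("gesture", "select_item", 0.8))  # pointing gesture selects the item
fsm.feed(Hypothesis("speech", "confirm", 0.9))       # spoken confirmation completes the step
print(fsm.accepted())  # True
```

Because the machine only encodes abstract tokens and states, it stays modality-independent: any recognizer that emits a matching token can drive the same transition.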

S. Kavaldjian, J. Falb, H. Kaindl and D. Ertl, "Semi-Automatically Generated High-Level Fusion for Multimodal User Interfaces," 2010 43rd Hawaii International Conference on System Sciences (HICSS), Koloa, Kauai, Hawaii, 2010, pp. 1-10.