ABSTRACT
Recent progress in 3D immersive display and virtual reality (VR) technologies has made many exciting applications possible. Fully exploiting this potential requires "natural" interfaces that let users manipulate such displays without cumbersome attachments. In this article we describe the use of visual hand-gesture analysis and speech recognition to develop a speech/gesture interface for controlling a 3D display. The interface enhances an existing application, VMD, a VR visual-computing environment for structural biology. Free-hand gestures, together with a set of speech commands, manipulate the 3D graphical display. We found
CITATION
Klaus Schulten, Yunxin Zhao, Michael Zeller, Zion Lo, Vladimir I. Pavlovic, Thomas S. Huang, Rajeev Sharma, Stephen Chu, and James C. Phillips, "Speech/Gesture Interface to a Visual-Computing Environment," IEEE Computer Graphics and Applications, vol. 20, no. , pp. 29-37, March/April 2000, doi:10.1109/38.824531