Proceedings of the 1994 28th Asilomar Conference on Signals, Systems and Computers
Pacific Grove, CA, USA
Oct. 31, 1994 to Nov. 2, 1994
A.J. Goldschen , Dept. of Electr. Eng. & Comput. Sci., George Washington Univ., Washington, DC, USA
O.N. Garcia , Dept. of Electr. Eng. & Comput. Sci., George Washington Univ., Washington, DC, USA
We describe a continuous optical automatic speech recognizer (OASR) that uses optical information from the oral-cavity shadow of a speaker. The system achieves a 25.3 percent recognition rate on sentences having a perplexity of 150 without using any syntactic, semantic, acoustic, or contextual guides. We introduce 13 mostly dynamic oral-cavity features used for optical recognition, identify phones that appear optically similar (visemes) for our speaker, and present the recognition results for our hidden Markov models (HMMs) using visemes, trisemes, and generalized trisemes. We conclude that future research is warranted for optical recognition, especially when combined with other input modalities.
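The recognizer described above decodes sequences of oral-cavity features with viseme-based HMMs. As a rough illustration only (not the authors' implementation), Viterbi decoding of a toy discrete-observation viseme HMM might be sketched as follows; all model parameters and the two-viseme setup are hypothetical:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden-state path for a discrete observation sequence.
    log_pi: (S,) initial log-probabilities
    log_A:  (S,S) transition log-probabilities (prev -> cur)
    log_B:  (S,V) emission log-probabilities over observation symbols
    """
    S, T = len(log_pi), len(obs)
    delta = np.full((T, S), -np.inf)      # best path log-score ending in each state
    back = np.zeros((T, S), dtype=int)    # backpointers for path recovery
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy 2-viseme model with made-up parameters, for illustration only.
log = np.log
pi = log(np.array([0.6, 0.4]))
A  = log(np.array([[0.7, 0.3], [0.4, 0.6]]))
B  = log(np.array([[0.9, 0.1], [0.2, 0.8]]))
print(viterbi(pi, A, B, [0, 0, 1, 1]))    # most likely viseme-state path
```

In the paper's setting the observations would be the extracted oral-cavity features (modeled with continuous emission densities) rather than discrete symbols, and the state inventory would cover visemes, trisemes, or generalized trisemes.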
speech recognition, hidden Markov models, optical information processing, speech coding
A. Goldschen, O. Garcia and E. Petajan, "Continuous optical automatic speech recognition by lipreading," Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers (ACSSC), Pacific Grove, CA, USA, 1995, pp. 572-577.