Computer Vision, IEEE International Conference on (2005)
Beijing, China
Oct. 17, 2005 to Oct. 20, 2005
ISSN: 1550-5499
ISBN: 0-7695-2334-X
pp: 1424-1431
Trevor Darrell , Massachusetts Institute of Technology
James Glass , Massachusetts Institute of Technology
Kevin Wilson , Massachusetts Institute of Technology
Karen Livescu , Massachusetts Institute of Technology
Michael Siracusa , Massachusetts Institute of Technology
Kate Saenko , Massachusetts Institute of Technology
We present an approach to detecting and recognizing spoken isolated phrases based solely on visual input. We adopt an architecture that first employs discriminative detection of visual speech and articulatory features, and then performs recognition using a model that accounts for the loose synchronization of the feature streams. Discriminative classifiers detect the subclass of lip appearance corresponding to the presence of speech, and further decompose it into features corresponding to the physical components of articulatory production. These components often evolve in a semi-independent fashion, and conventional viseme-based approaches to recognition fail to capture the resulting co-articulation effects. We present a novel dynamic Bayesian network with a multi-stream structure and observations consisting of articulatory feature classifier scores, which can model varying degrees of co-articulation in a principled way. We evaluate our visual-only recognition system on a command utterance task. We show comparative results on lip detection and speech/nonspeech classification, as well as recognition performance against several baseline systems.
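The abstract describes combining several loosely synchronized articulatory feature streams, where streams may drift slightly out of step rather than being forced to share a single viseme state. As a rough illustration of that idea (not the paper's actual dynamic Bayesian network), the toy sketch below picks a joint sub-state across streams from classifier scores, applying a soft penalty when the streams' state indices drift apart; the stream names, scores, and penalty form are all hypothetical.

```python
import itertools

def joint_score(stream_scores, max_async=1, penalty=0.5):
    """Score every joint state (one sub-state index per stream),
    discounting combinations whose indices drift apart by more
    than `max_async` positions, and return the best one.

    stream_scores: list of per-stream lists of log-scores, one
    score per sub-state position in that stream.
    """
    n_streams = len(stream_scores)
    best, best_state = float("-inf"), None
    for state in itertools.product(*(range(len(s)) for s in stream_scores)):
        drift = max(state) - min(state)  # how far out of sync the streams are
        score = sum(stream_scores[i][state[i]] for i in range(n_streams))
        if drift > max_async:
            # soft asynchrony penalty instead of a hard synchrony constraint
            score -= penalty * (drift - max_async)
        if score > best:
            best, best_state = score, state
    return best_state, best

# Two illustrative streams (e.g. lip opening vs. lip rounding),
# each giving log-scores for three sub-state positions.
lips = [0.1, 0.9, 0.2]
rounding = [0.8, 0.3, 0.1]
print(joint_score([lips, rounding]))
```

Here the best joint state lets the two streams sit one position apart, which a strictly synchronous (viseme-style) model would disallow; the penalty term is what makes the coupling "loose" rather than absent.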
Trevor Darrell, James Glass, Kevin Wilson, Karen Livescu, Michael Siracusa, Kate Saenko, "Visual Speech Recognition with Loosely Synchronized Feature Streams", Computer Vision, IEEE International Conference on, vol. 02, pp. 1424-1431, 2005, doi:10.1109/ICCV.2005.251