Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers (1994)
Pacific Grove, CA, USA
Oct. 31, 1994 to Nov. 2, 1994
ISSN: 1058-6393
ISBN: 0-8186-6405-3
pp: 583-586
B.E. Koster , Dept. of Comput. Sci., North Carolina State Univ., Raleigh, NC, USA
R.D. Rodman , Dept. of Comput. Sci., North Carolina State Univ., Raleigh, NC, USA
D. Bitzer , Dept. of Comput. Sci., North Carolina State Univ., Raleigh, NC, USA
ABSTRACT
The goal of automatic lip-sync (ALS) is to translate speech sounds into mouth shapes. Although this task seems related to speech recognition (SR), the direct mapping from sound to shape avoids many of the language problems associated with SR and provides a unique domain for error correction. Among other applications, ALS animation may be used for animating cartoons realistically and as an aid to the hearing disabled. Currently, a program named Owie performs speaker-dependent ALS for vowels.
INDEX TERMS
speech recognition, computer animation, error correction, synchronisation, handicapped aids, speech processing
CITATION

B. Koster, R. Rodman and D. Bitzer, "Automated lip-sync: direct translation of speech-sound to mouth-shape," Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers (ACSSC), Pacific Grove, CA, USA, 1995, pp. 583-586.
doi:10.1109/ACSSC.1994.471519