Issue No. 03 - May/June (2005 vol. 11)
ISSN: 1077-2626
pp: 341-352
Scott A. King and Richard E. Parent, IEEE Computer Society
We present a facial model designed primarily to support animated speech. The model takes facial geometry as input and transforms it into a parametric deformable model with a muscle-based parameterization, allowing easier integration of speech synchrony and facial expressions. A highly deformable lip model is grafted onto the input geometry to provide the geometric complexity needed for creating lip shapes and high-quality renderings, and a highly deformable tongue model represents the shapes the tongue assumes during speech. Teeth, gums, and upper-palate geometry complete the inner mouth. To decrease processing time, we deform the facial surface hierarchically. We also present a method to animate the model over time, creating animated speech with a coarticulation model that blends visemes together using dominance functions; we treat visemes as a dynamic shaping of the vocal tract, describing them as curves rather than keyframes. We demonstrate the utility of these techniques by implementing them in a text-to-audiovisual-speech system that creates speech animation from unrestricted text. After the facial and coarticulation models are interactively initialized, the system automatically creates accurate, real-time animated speech from the input text, producing large amounts of animated speech cheaply and with very low resource requirements.
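The coarticulation approach described above can be illustrated with a minimal sketch. The following Python code shows one common form of dominance-function blending (an exponential dominance curve in the style of Cohen and Massaro); the specific function shape, parameter names, and constants here are illustrative assumptions, not the paper's actual formulation:

```python
import math

def dominance(t, center, magnitude=1.0, rate=0.5):
    # Assumed exponential dominance function: a viseme's influence
    # peaks at its center time and decays with temporal distance.
    return magnitude * math.exp(-rate * abs(t - center))

def blend_visemes(t, visemes):
    # visemes: list of (center_time, target_value) pairs for a single
    # articulatory parameter (e.g., degree of lip opening).
    # The animated value at time t is the dominance-weighted average
    # of all viseme targets, so neighboring visemes shape each other.
    weights = [dominance(t, center) for center, _ in visemes]
    total = sum(weights)
    return sum(w * target for w, (_, target) in zip(weights, visemes)) / total

# Two visemes with targets 0.2 and 0.8; halfway between their centers
# the dominance weights are equal, so the blended value is the mean.
visemes = [(0.0, 0.2), (1.0, 0.8)]
print(round(blend_visemes(0.5, visemes), 3))  # -> 0.5
```

Because every viseme contributes at every instant (with exponentially decaying weight), transitions between lip shapes are smooth and context-dependent rather than a hard switch at keyframes, which is the essence of the coarticulation effect.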
Facial animation, speech synchronization, lip synchronization, animation, visual speech synthesis, coarticulation, facial modeling.

S. A. King and R. E. Parent, "Creating Speech-Synchronized Animation," IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 3, pp. 341-352, May/June 2005.