Computer Animation (1996)
June 3-4, 1996
Irfan Essa, Perceptual Computing Section, The Media Laboratory, Massachusetts Institute of Technology
Sumit Basu, Perceptual Computing Section, The Media Laboratory, Massachusetts Institute of Technology
Trevor Darrell, Perceptual Computing Section, The Media Laboratory, Massachusetts Institute of Technology
Alex Pentland, Perceptual Computing Section, The Media Laboratory, Massachusetts Institute of Technology
We describe tools that use measurements from video for the extraction of facial modeling and animation parameters, head tracking, and real-time interactive facial animation. These tools share common goals but differ in the details of their physical and geometric modeling and in their input measurement systems. Accurate facial modeling involves fine details of geometry and muscle coarticulation. By coupling pixel-by-pixel measurements of surface motion to a physically-based face model and a muscle control model, we have been able to obtain detailed spatio-temporal records of both the displacement of each point on the facial surface and the muscle control required to produce the observed facial motion. We will discuss the importance of this visually extracted representation in terms of realistic facial motion synthesis. A similar method that uses an ellipsoidal model of the head coupled with detailed estimates of visual motion allows accurate tracking of head motion in 3-D. Additionally, by coupling sparse, fast visual measurements with our physically-based model via an interpolation process, we have produced a real-time interactive facial animation/mimicking system.
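The head-tracking idea sketched in the abstract (coupling an ellipsoidal head model with dense visual-motion estimates) can be illustrated with a minimal least-squares fit. This sketch is not the authors' implementation: it assumes an orthographic camera, a small-motion linearization, and known 3-D model points, and the function name is hypothetical.

```python
import numpy as np

def head_motion_from_flow(points, flow):
    """Illustrative rigid-motion fit: given 3-D points X on a head
    ellipsoid and their observed 2-D image flow (orthographic
    projection assumed), recover angular velocity w and translation t
    from the small-motion model  dX/dt = w x X + t,  using only the
    image-plane (x, y) components of the velocity."""
    n = len(points)
    A = np.zeros((2 * n, 6))          # unknowns: [wx, wy, wz, tx, ty, tz]
    b = np.asarray(flow, float).reshape(-1)
    for i, (X, Y, Z) in enumerate(points):
        # image flow of point (X, Y, Z) under w x X + t:
        A[2 * i]     = [0.0,  Z, -Y, 1.0, 0.0, 0.0]   # u = wy*Z - wz*Y + tx
        A[2 * i + 1] = [-Z, 0.0,  X, 0.0, 1.0, 0.0]   # v = wz*X - wx*Z + ty
    # tz is unobservable under orthography; lstsq returns the
    # minimum-norm solution, so tz comes back as 0.
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params[:3], params[3:]      # (w, t)
```

In the paper's setting the flow field would come from the dense optical-flow measurements described above, sampled at points on the fitted ellipsoid; integrating the recovered (w, t) over time yields the 3-D head trajectory.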
Facial Modeling, Facial Animation, Interactive Animation, Expressions and Gestures, Computer Vision
T. Darrell, S. Basu, A. Pentland and I. Essa, "Modeling, Tracking and Interactive Animation of Faces and Heads Using Input from Video," Computer Animation (CA), Geneva, Switzerland, 1996, p. 68.