IEEE Computer Graphics and Applications, vol. 30, no. 4, July/August 2010, pp. 18-19
Published by the IEEE Computer Society
Catherine Pelachaud , French National Center for Scientific Research (CNRS)
Tamy Boubekeur , Telecom ParisTech
ABSTRACT
This special issue presents five articles covering a variety of computer graphics and embodied-conversational-agent applications related to digital human faces.
Faces are an important communication vector. Through facial expressions, gaze behaviors, and head movements, faces convey information on not only a person's emotional state and attitude but also discursive, pragmatic, and syntactic elements. The expressions result from subtle muscular contractions and wrinkle formation; we perceive them through the complex filter of subsurface scattering and other nontrivial light reflections.
Lately, there has been much interest in modeling 3D faces and their expressions. Researchers have investigated automatic or interactive generation of 3D geometry, as well as rendering and animation techniques. This research has many applications. One type of application relates to creating and animating virtual actors for films and video games. New rendering techniques ensure highly realistic skin models. Animators apply motion capture, with or without markers, to animate the body and the face. The capture can be accurate enough to reproduce a real actor's performance, down to the slightest movements of an emotional expression.
Another type of application involves creation of autonomous agents, particularly embodied conversational agents (ECAs)—entities with communicative and emotional capabilities. ECAs serve as Web assistants, pedagogical agents, or even companions. Researchers have proposed models to specify and control ECA behavior.
One goal of this special issue is to broadly cover domains linked to 3D faces and their creation, rendering, and animation. Moreover, we particularly aim to present excellent research from the computer graphics and ECA communities.
In This Issue
The issue contains five articles illustrating the state of the art in face modeling, rendering, animation, and the expression of emotion.
In "The Digital Emily Project: Achieving a Photorealistic Digital Actor," Oleg Alexander and his colleagues describe a state-of-the-art face performance capture system that produces astonishing visual output. Their system combines some of the most recent capture, rigging, and compositing techniques to provide production-quality special effects. Based on the Light Stage 5 system, their method provides high-resolution animated face geometry, together with accurate measures for specular and subsurface albedo and normals. As a result, this system produces photorealistic animated faces that often cross the well-known "uncanny valley."
Real-time applications such as games also need realistic faces to provide an immersive experience and convey characters' emotions. In "Real-Time Realistic Skin Translucency," Jorge Jimenez and his colleagues propose a scalable approximation method for subsurface scattering. This method produces realistic translucency effects at a high frame rate, requiring minimal extra cost compared to conventional rendering. The final rendering quality approaches that of precomputed pictures and points in an exciting way toward photorealistic real-time rendering of natural shapes.
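Techniques in this family typically estimate how far light travels through the geometry and attenuate it with a diffusion profile fitted to skin. The sketch below evaluates such a profile as a sum of Gaussians for a given traversal thickness; it is a minimal illustration of the general idea, not Jimenez and colleagues' shader, and the Gaussian variances and RGB weights are placeholder values of our own rather than a measured fit.

```python
import numpy as np

# Illustrative sum-of-Gaussians diffusion profile for a skin-like material.
# Variances (mm^2) and per-channel weights are assumptions standing in for
# a measured fit, not published parameters. Wider Gaussians carry more of
# the red weight, giving the characteristic reddish transmission of skin.
VARIANCES = np.array([0.0484, 0.187, 0.567, 1.99, 7.41])
WEIGHTS_RGB = np.array([
    [0.05, 0.25, 0.30, 0.280, 0.120],   # red
    [0.15, 0.35, 0.35, 0.130, 0.020],   # green
    [0.20, 0.45, 0.30, 0.045, 0.005],   # blue
])

def transmittance(thickness_mm):
    """Fraction of light transmitted through `thickness_mm` of tissue.

    Each Gaussian term decays with the squared distance light travels
    inside the material; summing the weighted terms approximates the
    full diffusion profile, evaluated per color channel.
    """
    falloff = np.exp(-(thickness_mm ** 2) / (2.0 * VARIANCES))
    rgb = WEIGHTS_RGB @ falloff
    return rgb / WEIGHTS_RGB.sum(axis=1)   # normalize so thickness 0 -> 1

for t in (0.5, 1.0, 2.0, 4.0):
    print(f"{t:4.1f} mm -> RGB {np.round(transmittance(t), 3)}")
```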
The next article is a contribution to facial-animation control, a critical component of 3D animation production. In "Direct-Manipulation Blendshapes," J.P. Lewis and Ken Anjyo redefine blendshapes for facial animation by letting users directly manipulate shapes and by deducing optimal slider control parameters for accurate tuning. The resulting interaction metaphor is more natural, while being fully compatible with existing slider-based controllers.
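The inversion at the heart of direct manipulation can be stated compactly: in a delta-blendshape model, the face is the neutral mesh plus a weighted sum of displacement targets, so pinning a few vertices yields a small regularized least-squares problem for the slider weights. The following sketch illustrates that formulation under our own assumptions (dense numpy matrices, a Tikhonov regularizer, and a simple clip to the slider range); it is not Lewis and Anjyo's actual solver.

```python
import numpy as np

def solve_weights(B, delta, reg=1e-3):
    """Recover blendshape slider weights from directly manipulated vertices.

    B     : (3m, n) matrix; column j stacks blendshape target j's
            displacements for the m constrained vertices.
    delta : (3m,) stacked displacements the user dragged those
            vertices through, relative to the neutral face.
    reg   : Tikhonov term keeping under-constrained weights near zero.
    """
    n = B.shape[1]
    # Regularized normal equations: (B^T B + reg * I) w = B^T delta.
    w = np.linalg.solve(B.T @ B + reg * np.eye(n), B.T @ delta)
    return np.clip(w, 0.0, 1.0)  # crude projection onto the slider range

# Toy example: 2 constrained vertices (6 coordinates), 3 blendshape targets.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 3))
delta = B @ np.array([0.5, 0.0, 0.25])   # displacement a user might drag
print(solve_weights(B, delta))            # approximately [0.5, 0, 0.25]
```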
"Modeling Short-Term Dynamics and Variability for Realistic Interactive Facial Animation," by Nicolas Stoiber and his colleagues, discusses a real-time animation system that reproduces the dynamism of facial expressions. The system uses motion capture data of expressive facial animations. Rather than considering the face as a whole object, the authors develop several motion models, each controlling a given part of the face. These models are trained on the motion capture data and can learn the dynamic characteristics of the various facial parts. Furthermore, a stochastic component ensures reproduction of the variability of human expressions. The resulting animation looks more natural.
Finally, in "The Expressive Gaze Model: Using Gaze to Express Emotion," Brent Lance and Stacy Marsella present a model encompassing head, torso, and eye movement in a hierarchical fashion. Their Expressive Gaze Model has two main components: a library of Gaze Warping Transformations and a procedural model of eye movement. These components combine motion capture data, procedural animation, physical animation, and even hand-crafted animation. Lance and Marsella conducted an empirical study to determine the mapping between gaze animation models and emotional states.
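As a rough illustration of the first component, a gaze warping transformation can be thought of as a set of timing and posture offsets extracted from an emotional/neutral pair of motions and then applied to a new neutral gaze shift. The keyframe representation and the offset values below are our own simplified assumptions, not Lance and Marsella's implementation.

```python
import numpy as np

def apply_gwt(keyframes, gwt):
    """Warp a neutral gaze shift with a Gaze-Warping-Transformation-like
    edit: per-keyframe offsets to timing and to head pitch.

    keyframes : (n, 2) array of (time_s, pitch_deg) for a neutral shift.
    gwt       : (n, 2) array of (time_offset, pitch_offset) pairs, here
                standing in for differences extracted from an emotional
                versus a neutral motion-capture pair (our assumption).
    """
    warped = keyframes + gwt
    warped[:, 0] = np.maximum.accumulate(warped[:, 0])  # keep time monotonic
    return warped

# Neutral downward gaze shift: start, midpoint, hold.
neutral = np.array([[0.0, 0.0], [0.3, -20.0], [0.8, -20.0]])
# A "downcast, slower" transformation: later timing, deeper head pitch.
downcast = np.array([[0.0, -5.0], [0.2, -10.0], [0.4, -10.0]])
print(apply_gwt(neutral, downcast))
```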
We hope you enjoy reading these articles as much as we did.
Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.
Catherine Pelachaud is a French National Center for Scientific Research (CNRS) director of research in the Multimedia group of the Signal and Image Processing department at the CNRS Laboratory for Information Processing and Communication, located at Telecom ParisTech. Her research interests are embodied conversational agents, nonverbal behavior, models of expressive and emotional behavior, and interactive and multimodal systems. Pelachaud has a PhD in computer graphics from the University of Pennsylvania. Contact her at catherine.pelachaud@telecom-paristech.fr.
Tamy Boubekeur is an associate professor of computer science leading the Computer Graphics group in the Signal and Image Processing department at the CNRS Laboratory for Information Processing and Communication, located at Telecom ParisTech. His research interest is 3D computer graphics, particularly geometric modeling and rendering. Boubekeur has a PhD in computer science from the University of Bordeaux. Contact him at tamy.boubekeur@telecom-paristech.fr.