Constraint-Based Model for Synthesis of Multimodal Sequential Expressions of Emotions
IEEE Transactions on Affective Computing, vol. 2, no. 3, July-September 2011
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/T-AFFC.2011.5
R. Niewiadomski , Telecom ParisTech, Paris, France
S. J. Hyniewska , Telecom ParisTech, Paris, France
C. Pelachaud , Telecom ParisTech, Paris, France
Emotional expressions play a central role in the interaction between virtual agents and human users. In this paper, we present a new constraint-based approach to the generation of multimodal emotional displays. The displays generated with our method are not limited to the face: they are composed of signals from different modalities, partially ordered in time. We also describe an evaluation of the main features of our approach, examining the role of multimodality, sequentiality, and constraints in the perception of synthesized emotional states. The results show that applying our algorithm improves the communication of a large spectrum of emotional states, and that the believability of the agent animations increases when constraints over the multimodal signals are used.
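The abstract does not spell out the scheduling mechanism, but the core idea of signals "partially ordered in time" can be loosely illustrated with a small sketch. Everything below is a hypothetical assumption for illustration only (the signal names, durations, and the simple relaxation-based scheduler are not the paper's constraint model): each signal has a modality and a duration, and pairwise precedence constraints force one signal to end before another starts, while unconstrained signals on different modalities may overlap.

```python
# Illustrative sketch only, NOT the authors' algorithm: schedule multimodal
# signals so that pairwise precedence constraints (a ends before b starts)
# are satisfied, leaving unconstrained signals free to overlap.
from dataclasses import dataclass

@dataclass
class Signal:
    name: str        # hypothetical signal name, e.g., "raise_brows"
    modality: str    # e.g., "face", "head", "gesture"
    duration: float  # seconds

def schedule(signals, order):
    """Assign a start time to each signal such that for every pair
    (a, b) in `order`, signal a ends no later than signal b starts.
    Uses simple fixed-point relaxation over the partial order."""
    sigs = {s.name: s for s in signals}
    starts = {s.name: 0.0 for s in signals}
    # |signals| relaxation passes suffice for an acyclic precedence graph
    for _ in range(len(signals)):
        for a, b in order:
            starts[b] = max(starts[b], starts[a] + sigs[a].duration)
    return starts

signals = [
    Signal("raise_brows", "face", 0.4),
    Signal("smile", "face", 0.8),
    Signal("head_nod", "head", 0.6),
]
# raise_brows must finish before either smile or head_nod begins;
# smile and head_nod are unordered, so they may run in parallel.
order = [("raise_brows", "smile"), ("raise_brows", "head_nod")]
starts = schedule(signals, order)
```

In this toy run, "smile" and "head_nod" both start at 0.4 s on different modalities, showing how a partial (rather than total) order yields overlapping multimodal displays.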
Index Terms: virtual reality, computer animation, emotion recognition, graphical user interfaces, human-computer interaction, multimodal sequential expression, virtual agents, human users, constraint-based model, multimodal emotional displays, emotional expression, synthesized emotional states, agent animation, multimodal signals, face recognition, hidden Markov models, videos, games, heuristic algorithms, artificial, augmented, and virtual realities
C. Pelachaud, S. J. Hyniewska and R. Niewiadomski, "Constraint-Based Model for Synthesis of Multimodal Sequential Expressions of Emotions," in IEEE Transactions on Affective Computing, vol. 2, no. 3, pp. 134-146, 2011.