Expressive avatars in MPEG-4
Found in: Multimedia and Expo, IEEE International Conference on
By M. Mancini, B. Hartmann, C. Pelachaud, A. Raouzaiou, K. Karpouzis
Issue Date: July 2005
4 pp.
Man-machine interaction (MMI) systems that utilize multimodal information about users' current emotional state are presently at the forefront of interest of the computer vision and artificial intelligence communities. A lifelike avatar can enhance interact...
 
Building Autonomous Sensitive Artificial Listeners
Found in: IEEE Transactions on Affective Computing
By M. Schroder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, M. Valstar, M. Wollmer
Issue Date: April 2012
pp. 165-183
This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and nonverbal interaction capabilities. The work is motivated by the aim to provide technology with competences in perceiving an...
 
Constraint-Based Model for Synthesis of Multimodal Sequential Expressions of Emotions
Found in: IEEE Transactions on Affective Computing
By R. Niewiadomski, S. J. Hyniewska, C. Pelachaud
Issue Date: July 2011
pp. 134-146
Emotional expressions play a very important role in the interaction between virtual agents and human users. In this paper, we present a new constraint-based approach to the generation of multimodal emotional displays. The displays generated with our method...
 
Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing
Found in: IEEE Transactions on Affective Computing
By A. Vinciarelli, M. Pantic, D. Heylen, C. Pelachaud, I. Poggi, F. D'Errico, M. Schroeder
Issue Date: January 2012
pp. 69-87
Social Signal Processing is the research domain aimed at bridging the social intelligence gap between humans and machines. This paper is the first survey of the domain that jointly considers its three major aspects, namely, modeling, analysis, and synthesi...
 
Design and evaluation of expressive gesture synthesis for embodied conversational agents
Found in: Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems (AAMAS '05)
By B. Hartmann, C. Pelachaud, M. Mancini, S. Buisine
Issue Date: July 2005
pp. 1095-1096
To increase the believability and life-likeness of Embodied Conversational Agents (ECAs), we introduce a behavior synthesis technique for the generation of expressive gesturing. A small set of dimensions of expressivity is used to characterize individual v...