Displaying results 1-15 of 15
From Emotions to Interpersonal Stances: Multi-level Analysis of Smiling Virtual Characters
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Magalie Ochs, Ken Prepin, Catherine Pelachaud
Issue Date: September 2013
pp. 258-263
In this paper, we explore the emotions and interpersonal stances that smile expressions may convey by analyzing the user's perception of smiling embodied conversational agents at different levels: (1) a signal level considering the emotions and stan...
 
A Multimodal Corpus Approach to the Design of Virtual Recruiters
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Mathieu Chollet, Magalie Ochs, Chloe Clavel, Catherine Pelachaud
Issue Date: September 2013
pp. 19-24
This paper presents the analysis of the multimodal behavior of experienced practitioners of job interview coaching, and describes a methodology to specify their behavior in Embodied Conversational Agents acting as virtual recruiters displaying different in...
 
Evaluation of Four Designed Virtual Agent Personalities
Found in: IEEE Transactions on Affective Computing
By Margaret McRorie, Ian Sneddon, Gary McKeown, Elisabetta Bevacqua, Etienne de Sevin, Catherine Pelachaud
Issue Date: July 2012
pp. 311-322
Convincing conversational agents require a coherent set of behavioral responses that can be interpreted by a human observer as indicative of a personality. This paper discusses the continued development and subsequent evaluation of virtual agents based on ...
 
Expressive MPEG-4 Facial Animation Using Quadratic Deformation Models
Found in: International Conference on Computer Graphics, Imaging and Visualization
By Mohammad Obaid, Ramakrishnan Mukundan, Mark Billinghurst, Catherine Pelachaud
Issue Date: August 2010
pp. 9-14
In this paper we propose an approach compliant with the MPEG-4 standard to synthesize and control facial expressions generated using 3D facial models. This is achieved by establishing conformity between the MPEG-4 facial animation standard and the quadratic defor...
 
Guest Editors' Introduction: Digital Human Faces: From Creation to Emotion
Found in: IEEE Computer Graphics and Applications
By Catherine Pelachaud, Tamy Boubekeur
Issue Date: July 2010
pp. 18-19
This special issue presents five articles covering a variety of computer graphics and embodied-conversational-agent applications related to digital human faces.
 
Influences and Embodied Conversational Agents
Found in: International Joint Conference on Autonomous Agents and Multiagent Systems
By Vincent Maya, Myriam Lamolle, Catherine Pelachaud
Issue Date: July 2004
pp. 1306-1307
We aim at creating an Embodied Conversational Agent (ECA) that would not only exhibit behavior consistent with its personality and contextual environment factors but that would also be defined as an individual and not as a generic agent. The behavior of ...
   
Towards a Simulation of Conversations with Expressive Embodied Speakers and Listeners
Found in: International Conference on Computer Animation and Social Agents
By Thomas Rist, Markus Schmitt, Catherine Pelachaud, Massimo Bilvi
Issue Date: May 2003
pp. 5
In this paper we present results on modeling complex interactions among virtual characters that participate in negotiation dialogues, as well as our work on a gaze model that controls the eye behavior of several agents conversing with each other. ...
 
Formational Parameters and Adaptive Prototype Instantiation for MPEG-4 Compliant Gesture Synthesis
Found in: Computer Animation
By Björn Hartmann, Maurizio Mancini, Catherine Pelachaud
Issue Date: June 2002
pp. 111
This paper introduces Gesture Engine, an animation system that synthesizes human gesturing behaviors from augmented conversation transcripts using a database of high-level gesture definitions. An abstract scripting language to specify hand-arm gestures is ...
 
A multimodal fuzzy inference system using a continuous facial expression representation for emotion detection
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Catherine Pelachaud, Catherine Soladié, Hanan Salam, Nicolas Stoiber, Renaud Séguier
Issue Date: October 2012
pp. 493-500
This paper presents a multimodal fuzzy inference system for emotion detection. The system extracts and merges visual, acoustic, and context-relevant features. The experiments have been performed as part of the AVEC 2012 challenge. Facial expressions play an...
     
Towards a smiling ECA: studies on mimicry, timing and types of smiles
Found in: Proceedings of the 2nd international workshop on Social signal processing (SSPW '10)
By Catherine Pelachaud, Elisabetta Bevacqua, Ken Prepin, Magalie Ochs, Radoslaw Niewiadomski
Issue Date: October 2010
pp. 65-70
The smile is one of the most frequently used nonverbal signals. Depending on when, how, and where it is displayed, it may convey various meanings. We believe that introducing a variety of smiles may improve the communicative skills of embodied conversational agent...
     
Multimodal expressive embodied conversational agents
Found in: Proceedings of the 13th annual ACM international conference on Multimedia (MULTIMEDIA '05)
By Catherine Pelachaud
Issue Date: November 2005
pp. 683-689
In this paper we present our work toward the creation of a multimodal expressive Embodied Conversational Agent (ECA). Our agent, called Greta, exhibits nonverbal behaviors synchronized with speech. We are using the taxonomy of communicative functions devel...
     
Multimodal user interfaces for a travel assistant
Found in: Proceedings of the 15th French-speaking Conference on Human-Computer Interaction (15ème Conférence Francophone sur l'Interaction Homme-Machine, IHM 2003)
By Alain Goye, Catherine Pelachaud, Eric Lecolinet, Gerard Chollet, Shiuan-Sung Lin, Xiaoqing Ding, Yang Ni
Issue Date: November 2003
pp. 244-247
As a part of a project to develop a personal assistant for travellers, we have studied three types of multimodal interfaces for a PDA: 1) a combination of Control menus and vocal inputs to control zoomable user interfaces to graphical or textual databases,...
     
Embodied contextual agent in information delivering application
Found in: Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2 (AAMAS '02)
By Berardina De Carolis, Catherine Pelachaud, Fiorella de Rosis, Isabella Poggi, Valeria Carofiglio
Issue Date: July 2002
pp. 758-765
We aim at building a new human-computer interface for Information Delivering applications: the conversational agent that we have developed is a multimodal believable agent able to converse with the User by exhibiting a synchronized and coherent verbal and ...
     
A reflexive, not impulsive agent
Found in: Proceedings of the fifth international conference on Autonomous agents (AGENTS '01)
By Berardina DeCarolis, Catherine Pelachaud, Fiorella de Rosis, Isabella Poggi
Issue Date: May 2001
pp. 186-187
The aim of our present research is to build an Agent capable of communicative and expressive behavior. The Agent should be able to express its emotions but also to refrain from expressing them: a reflexive, not an impulsive Agent. A Reflexive Agent is an a...
     
Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents
Found in: Proceedings of the 21st annual conference on Computer graphics and interactive techniques (SIGGRAPH '94)
By Brett Achorn, Brett Douville, Catherine Pelachaud, Justine Cassell, Mark Steedman, Matthew Stone, Norman Badler, Scott Prevost, Tripp Becket
Issue Date: July 1994
pp. 413-420
We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue...
     