Issue No. 04 - Oct.-Dec. 2017 (vol. 8)
ISSN: 1949-3045
pp: 546-558
Yu Ding , Department of Computer Science, University of Houston, Texas, United States
Jing Huang , School of Information and Electronic Engineering, Zhejiang Gongshang University, Hangzhou, China
Catherine Pelachaud , ISIR-CNRS, Université Pierre et Marie Curie, Paris, France
ABSTRACT
It has been well documented that laughter is an important communicative and expressive signal in face-to-face conversations. Our work aims at building a laughter behavior controller for a virtual character that generates upper body animations from laughter audio given as input. This controller relies on the tight correlations between laughter audio and body behaviors. A unified continuous-state statistical framework, inspired by the Kalman filter, is proposed to learn the correlations between laughter audio and head/torso behavior from a recorded human laughter dataset. Due to the lack of shoulder behavior data in the recorded human dataset, a rule-based method is defined to model the correlation between laughter audio and shoulder behavior. In the synthesis step, these characterized correlations are rendered in the animation of a virtual character. To validate our controller, a subjective evaluation is conducted in which participants viewed videos of a laughing virtual character. The evaluation compares animations of a virtual character generated with our controller and with a state-of-the-art method. The results show that the laughter animations computed with our controller are perceived as more natural, expressing amusement more freely and appearing more authentic than those of the state-of-the-art method.
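The abstract only states that the framework is "inspired by the Kalman filter"; the following is a minimal, hypothetical sketch of what an input-driven, Kalman-filter-style linear-Gaussian state-space model mapping per-frame laughter audio features to head/torso pose could look like. The class name, the choice of audio features (pitch, energy), and all matrix values are illustrative assumptions, not the authors' implementation; in practice the parameters would be learned from a recorded laughter dataset.

```python
# Hypothetical sketch: an input-driven, Kalman-filter-style linear-Gaussian
# state-space model in which per-frame laughter audio features drive a hidden
# head/torso pose state. All parameters here are toy values for illustration.
import numpy as np

class AudioDrivenPoseFilter:
    def __init__(self, A, B, C, Q, R, x0):
        self.A, self.B, self.C = A, B, C      # dynamics, input, and observation matrices
        self.Q, self.R = Q, R                 # process / observation noise covariances
        self.x = x0                           # hidden pose state (e.g., head pitch/yaw, torso lean)
        self.P = np.eye(len(x0))              # state covariance

    def step(self, u, y=None):
        """Advance one frame given audio features u; optionally correct with an observed pose y."""
        # Predict: pose evolves linearly and is excited by the audio features.
        self.x = self.A @ self.x + self.B @ u
        self.P = self.A @ self.P @ self.A.T + self.Q
        if y is not None:
            # Correct against an observed pose (only available at training/analysis time).
            S = self.C @ self.P @ self.C.T + self.R
            K = self.P @ self.C.T @ np.linalg.inv(S)
            self.x = self.x + K @ (y - self.C @ self.x)
            self.P = (np.eye(len(self.x)) - K @ self.C) @ self.P
        return self.C @ self.x                # predicted pose for this frame

# Toy usage: a 2-D audio feature vector (pitch, energy) driving a 3-D pose state.
A = 0.9 * np.eye(3)                           # smooth, slowly decaying pose dynamics
B = np.array([[0.05, 0.00],
              [0.00, 0.10],
              [0.02, 0.05]])                  # how pitch/energy excite each pose dimension
C = np.eye(3)
Q, R = 0.01 * np.eye(3), 0.05 * np.eye(3)

controller = AudioDrivenPoseFilter(A, B, C, Q, R, x0=np.zeros(3))
audio_frames = np.random.rand(100, 2)         # placeholder laughter audio features
poses = np.array([controller.step(u) for u in audio_frames])
print(poses.shape)                            # (100, 3): one pose vector per audio frame
```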
INDEX TERMS
Hidden Markov models, Animation, Torso, Correlation, Mouth, Lips, Speech
CITATION

Y. Ding, J. Huang and C. Pelachaud, "Audio-Driven Laughter Behavior Controller," in IEEE Transactions on Affective Computing, vol. 8, no. 4, pp. 546-558, 2017.
doi:10.1109/TAFFC.2017.2754365