2011 Third International Conference on Knowledge and Systems Engineering
Audiovisual Affect Recognition in Spontaneous Filipino Laughter
Hanoi, Vietnam
October 14-October 17
ISBN: 978-0-7695-4567-7
Laughter has been identified as an important social signal that conveys emotional information about users. This paper extends a previous study that uncovered underlying affect in Filipino laughter using audio features, a posed laughter database, and categorical labels. The present study analyzed visual (facial points) and audio (voice) information from a spontaneous laughter corpus with dimensional labels. Laughter instances from three test subjects made up the corpus. Audio features extracted from the instances included prosodic features such as pitch, energy, intensity, formants (F1, F2, and F3), and pitch contours, as well as thirteen Mel Frequency Cepstral Coefficients (MFCCs). Visual features included 170 facial distances computed from 68 facial points. Machine learning experiments were then performed, in which Support Vector Machine Regression yielded the lowest mean absolute error of 0.0506 on the facial dataset. The other classifiers evaluated were Linear Regression and Multilayer Perceptron.
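The paper itself does not publish code; as a rough illustration of the kind of pipeline the abstract describes, the sketch below uses librosa (an assumed tool choice) for pitch, energy, and MFCC extraction, pairwise landmark distances as a stand-in for the paper's hand-picked 170 facial distances, and scikit-learn's SVR scored by mean absolute error. The function names and the synthetic inputs are hypothetical, not the authors' method.

```python
import numpy as np
import librosa
from scipy.spatial.distance import pdist
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def extract_audio_features(wav_path):
    """Pitch, energy, and 13 MFCCs for one laughter clip.

    Formant tracking (F1-F3) is omitted here; it would typically be
    done with a dedicated tool such as Praat.
    """
    y, sr = librosa.load(wav_path, sr=None)
    f0 = librosa.yin(y, fmin=80, fmax=500, sr=sr)        # frame-level pitch
    rms = librosa.feature.rms(y=y)[0]                    # frame-level energy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 MFCCs per frame
    # Summarize frame-level tracks with means so every clip yields a
    # fixed-length vector: 1 pitch + 1 energy + 13 MFCC means.
    return np.concatenate([[f0.mean()], [rms.mean()], mfcc.mean(axis=1)])

def facial_distances(landmarks):
    """Pairwise Euclidean distances between 68 (x, y) facial points.

    The paper uses a selected set of 170 distances; all 68*67/2 = 2278
    pairwise distances are computed here purely for illustration.
    """
    return pdist(landmarks)

# Hypothetical data: real inputs would be laughter instances from the
# corpus, each paired with a dimensional (e.g. valence/arousal) label.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2278))    # stand-in facial feature vectors
y = rng.uniform(-1, 1, size=120)    # stand-in dimensional labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = SVR().fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```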
Index Terms:
Laughter, Audio Signals, Video Signals, Affect/Emotion Recognition, Empathic Computing
Citation:
Christopher Galvan, David Manangan, Michael Sanchez, Jason Wong, Jocelynn Cu, "Audiovisual Affect Recognition in Spontaneous Filipino Laughter," 2011 Third International Conference on Knowledge and Systems Engineering (KSE), pp. 266-271, 2011.