Issue No. 03 - July-September (2012 vol. 3)
Daniel J. McDuff , Massachusetts Institute of Technology, Cambridge
Mohammed Ehsan Hoque , Massachusetts Institute of Technology, Cambridge
Rosalind W. Picard , Massachusetts Institute of Technology, Cambridge
We created two experimental situations to elicit two affective states: frustration and delight. In the first experiment, participants were asked to recall situations while expressing either delight or frustration; the second experiment elicited these states naturally, through a frustrating interaction and a delightful video. The acted and natural occurrences of the expressions differed in two significant ways. First, the acted instances were much easier for the computer to classify. Second, in 90 percent of the acted cases, participants did not smile when frustrated, whereas in 90 percent of the natural cases, participants smiled during the frustrating interaction, despite self-reporting significant frustration with the experience. As a follow-up study, we developed an automated system that distinguishes naturally occurring spontaneous smiles elicited under frustrating and delightful stimuli by exploring their temporal patterns in video. We extracted local and global features describing smile dynamics, then evaluated and compared two variants of Support Vector Machines (SVMs), Hidden Markov Models (HMMs), and Hidden-state Conditional Random Fields (HCRFs) for binary classification. While human classification of the smile videos recorded under frustrating stimuli was below chance, a dynamic SVM classifier distinguished smiles under frustrating and delightful stimuli with 92 percent accuracy.
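The classification pipeline described above can be illustrated with a minimal sketch: extract local and global temporal features from a per-frame smile-intensity track, then train an SVM for binary classification. This is not the authors' actual pipeline; the intensity curves are synthetic, and the assumption that delighted smiles rise faster and peak earlier than frustrated ones is purely illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def synthetic_smile(delighted, n_frames=120):
    """Hypothetical smile-intensity track: here delighted smiles rise
    faster and onset earlier than frustrated ones (illustrative only)."""
    t = np.linspace(0.0, 1.0, n_frames)
    rate = 14.0 if delighted else 5.0
    onset = 0.25 if delighted else 0.45
    curve = 1.0 / (1.0 + np.exp(-rate * (t - onset)))
    return curve + rng.normal(0.0, 0.05, n_frames)

def temporal_features(curve):
    """Simple local/global dynamics: overall level, peak, spread,
    steepest frame-to-frame rise, mean velocity, relative peak time."""
    d = np.diff(curve)
    return np.array([curve.mean(), curve.max(), curve.std(),
                     d.max(), d.mean(), np.argmax(curve) / len(curve)])

# 0 = smile under frustrating stimulus, 1 = smile under delightful stimulus
labels = np.array([0] * 100 + [1] * 100)
X = np.array([temporal_features(synthetic_smile(bool(y))) for y in labels])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Because the synthetic classes are well separated, this toy setup classifies almost perfectly; the paper's contribution is obtaining high accuracy on real smiles whose surface appearance is similar across the two conditions.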
Avatars, Computers, Face, Cameras, Speech, Humans, smile while frustrated, expression classification, temporal patterns, natural dataset, natural versus acted data
Daniel J. McDuff, Mohammed Ehsan Hoque, Rosalind W. Picard, "Exploring Temporal Patterns in Classifying Frustrated and Delighted Smiles", IEEE Transactions on Affective Computing, vol. 3, no. 3, pp. 323-334, July-September 2012, doi:10.1109/T-AFFC.2012.11