Issue No. 3 - July-September 2012 (vol. 3)
pp. 323-334
Mohammed Ehsan Hoque , Massachusetts Institute of Technology, Cambridge
Daniel J. McDuff , Massachusetts Institute of Technology, Cambridge
Rosalind W. Picard , Massachusetts Institute of Technology, Cambridge
ABSTRACT
We created two experimental situations to elicit two affective states: frustration and delight. In the first experiment, participants were asked to recall situations while expressing either delight or frustration; the second experiment elicited these states naturally, through a frustrating experience and through a delightful video. The acted and natural occurrences of the expressions differed in two significant ways. First, the acted instances were much easier for the computer to classify. Second, in 90 percent of the acted cases, participants did not smile when frustrated, whereas in 90 percent of the natural cases, participants smiled during the frustrating interaction, despite self-reporting significant frustration with the experience. As a follow-up study, we developed an automated system that distinguishes naturally occurring spontaneous smiles under frustrating and delightful stimuli by exploring their temporal patterns, given video of both. We extracted local and global features related to human smile dynamics, then evaluated and compared two variants of Support Vector Machines (SVMs), Hidden Markov Models (HMMs), and Hidden-state Conditional Random Fields (HCRFs) for binary classification. While human classification of the smile videos under frustrating stimuli was below chance, a dynamic SVM classifier distinguished smiles under frustrating and delightful stimuli with 92 percent accuracy.
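The abstract describes extracting local and global features of smile dynamics from a per-frame smile-intensity signal before classification. As a minimal sketch of what such global features might look like, the following computes peak intensity, duration above a threshold, and mean rise speed from a synthetic intensity series. The specific feature set, the `smile_global_features` name, and the 0.5 threshold are illustrative assumptions, not the authors' published pipeline.

```python
# Hedged sketch: illustrative global features over a smile-intensity
# time series (one value per video frame, in [0, 1]). The exact features
# used in the paper are not reproduced here; these are assumptions.

def smile_global_features(intensity, threshold=0.5):
    """Summarize a smile-intensity time series with a few global statistics."""
    peak = max(intensity)
    # Duration: number of frames where the smile exceeds the threshold.
    duration = sum(1 for v in intensity if v >= threshold)
    # Rise speed: mean frame-to-frame increase from onset to the peak frame.
    peak_idx = intensity.index(peak)
    rises = [intensity[i + 1] - intensity[i] for i in range(peak_idx)]
    rise_speed = sum(rises) / len(rises) if rises else 0.0
    return {"peak": peak, "duration": duration, "rise_speed": rise_speed}

# Example: a gradual-onset smile signal over ten frames.
slow_onset = [0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 0.9, 0.7, 0.3, 0.1]
features = smile_global_features(slow_onset)
```

Feature vectors of this kind, computed per smile episode, could then be fed to a static classifier, while the dynamic classifiers named in the abstract (HMMs, HCRFs, dynamic SVMs) would operate on the temporal sequence itself.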
INDEX TERMS
Avatars, Computers, Face, Cameras, Speech, Humans, Filling, smile while frustrated, Expressions classification, temporal patterns, natural dataset, natural versus acted data
CITATION
Mohammed Ehsan Hoque, Daniel J. McDuff, Rosalind W. Picard, "Exploring Temporal Patterns in Classifying Frustrated and Delighted Smiles", IEEE Transactions on Affective Computing, vol.3, no. 3, pp. 323-334, July-September 2012, doi:10.1109/T-AFFC.2012.11
REFERENCES
[1] H. Gunes and M. Pantic, “Automatic, Dimensional and Continuous Emotion Recognition,” Int'l J. Synthetic Emotion, vol. 1, no. 1, pp. 68-99, 2010.
[2] D. Keltner and P. Ekman, “Facial Expression of Emotion,” Handbook of Emotions, M. Lewis and J.M. Haviland-Jones, eds., pp. 236-249, Guilford Press, 2000.
[3] P.N. Juslin and K.R. Scherer, “Vocal Expression of Affect,” The New Handbook of Methods in Nonverbal Behavior Research, J.A. Harrigan, R. Rosenthal, and K.R. Scherer, eds., pp. 65-135, Oxford Univ. Press, 2005.
[4] M.S. Bartlett, G.C. Littlewort, M.G. Frank, C. Lainscsek, I.R. Fasel, and J.R. Movellan, “Automatic Recognition of Facial Actions in Spontaneous Expressions,” J. Multimedia, pp. 1-14, Oct. 2006.
[5] E. Douglas-Cowie, R. Cowie, I. Sneddon, C. Cox, O. Lowry, M. McRorie, J. Martin, L. Devillers, S. Abrilian, A. Batliner, N. Amir, and K. Karpouzis, “The Humaine Database: Addressing the Collection and Annotation of Naturalistic and Induced Emotional Data,” Proc. Second Int'l Conf. Affective Computing and Intelligent Interaction, pp. 488-501, Jan. 2007.
[6] G. McKeown, M.F. Valstar, R. Cowie, and M. Pantic, “The SEMAINE Corpus of Emotionally Coloured Character Interactions,” Proc. IEEE Int'l Conf. Multimedia and Expo, pp. 1079-1084, July 2010.
[7] S. Baron-Cohen et al., Mind Reading: The Interactive Guide to Emotions. Jessica Kingsley Publishers, 2004.
[8] M. Pantic, M.F. Valstar, R. Rademaker, and L. Maat, “Web-Based Database for Facial Expression Analysis,” Proc. IEEE Int'l Conf. Multimedia and Expo, pp. 317-321, July 2005.
[9] M.E. Hoque and R.W. Picard, “I See You (ICU): Towards Robust Recognition of Facial Expressions and Speech Prosody in Real Time,” Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 2010.
[10] C. Kueblbeck and A. Ernst, “Face Detection and Tracking in Video Sequences Using the Modified Census Transformation,” J. Image and Vision Computing, vol. 24, no. 6, pp. 564-572, 2006.
[11] C. Chang and C. Lin, “LIBSVM: A Library for Support Vector Machines,” ACM Trans. Intelligent Systems and Technology, vol. 2, no. 3, pp. 27:1-27:27, 2011.
[12] K. Murphy, “The Bayes Net Toolbox for Matlab,” Computing Science and Statistics, vol. 33, pp. 331-350, 2001.
[13] L.-P. Morency, Hidden-State Conditional Random Field Library, 2007.
[14] J. Lafferty, A. McCallum, and F. Pereira, “Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data,” Proc. 18th Int'l Conf. Machine Learning, pp. 282-289, 2001.
[15] M. Pantic, “Machine Analysis of Facial Behaviour: Naturalistic and Dynamic Behavior,” Philosophical Trans. Royal Soc. B, vol. 364, pp. 3505-3513, Dec. 2009.
[16] K. Schneider and I. Josephs, “The Expressive and Communicative Functions of Preschool Children's Smiles in an Achievement Situation,” J. Nonverbal Behavior, vol. 15, pp. 185-198, 1991.
[17] C. Küblbeck, T. Ruf, and A. Ernst, “A Modular Framework to Detect and Analyze Faces for Audience Measurement Systems,” GI Jahrestagung: Proc. Second Workshop Pervasive Advertising, pp. 3941-3953, 2009.
[18] D. Messinger, A. Fogel, and K.L. Dickson, “What's in a Smile?” Developmental Psychology, vol. 35, no. 3, pp. 701-708, 1999.
[19] B. Reeves and C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge Univ. Press, 1996.
[20] E.L. Rosenberg and P. Ekman, “Coherence between Expressive and Experiential Systems in Emotion,” Cognition and Emotion, vol. 8, pp. 201-229, 1994.
[21] J.M. Carroll and J.A. Russell, “Facial Expressions in Hollywood's Portrayal of Emotion,” J. Personality and Social Psychology, vol. 72, no. 1, pp. 164-176, 1997.
[22] A. Ortony and T.J. Turner, “What's Basic about Basic Emotions?” Psychological Rev., vol. 97, pp. 315-331, 1990.
[23] A.J. Fridlund, Human Facial Expression: An Evolutionary View. Academic Press, 1994.
[24] J. Fernández-Dols, F. Sánchez, P. Carrera, and M. Ruiz-Belda, “Are Spontaneous Expressions and Emotions Linked? An Experimental Test of Coherence,” J. Nonverbal Behavior, vol. 21, no. 3, pp. 163-177, 1997.
[25] P. Ekman, W.V. Friesen, and P. Ellsworth, Emotion in the Human Face. Pergamon Press, 1972.
[26] A.J. Fridlund, “Sociality of Social Smiling: Potentiation by an Implicit Audience,” J. Personality and Social Psychology, vol. 60, pp. 229-240, 1991.
[27] U. Hess, R. Banse, and A. Kappas, “The Intensity of Facial Expression Is Determined by Underlying Affective State and Social Situation,” J. Personality and Social Psychology, vol. 69, pp. 280-288, 1995.
[28] R.E. Kraut and R.E. Johnston, “Social and Emotional Messages of Smiling: An Ethological Approach,” J. Personality and Social Psychology, vol. 37, pp. 1539-1553, 1979.
[29] R. El Kaliouby, P. Robinson, and S. Keates, “Temporal Context and the Recognition of Emotion from Facial Expression,” Proc. 10th Int'l Conf. Human-Computer Interaction, pp. 22-27, June 2003.
[30] M.E. Hoque and R.W. Picard, “Acted vs. Natural Frustration and Delight: Many People Smile in Natural Frustration,” Proc. Ninth IEEE Int'l Conf. Automatic Face and Gesture Recognition, Mar. 2011.
[31] S. Baron-Cohen, H.A. Ring, E.T. Bullmore, S. Wheelwright, C. Ashwin, and S.C.R. Williams, “The Amygdala Theory of Autism,” Neuroscience and Biobehavioral Rev., vol. 24, pp. 355-364, 2000.
[32] M.A. Howard, P.E. Cowell, J. Boucher, P. Brooks, A. Mayes, A. Farrant, and N. Roberts, “Convergent Neuroanatomical and Behavioural Evidence of an Amygdala Hypothesis of Autism,” Neuroreport, vol. 11, pp. 2931-2935, 2000.
[33] FaceTracker, Facial Feature Tracking SDK, Neven Vision, 2002.