Issue No. 1, January 2009 (vol. 31), pp. 39-58
Zhihong Zeng, University of Illinois at Urbana-Champaign, Urbana
Maja Pantic, Imperial College London, UK, and University of Twente, Netherlands
Glenn I. Roisman, University of Illinois at Urbana-Champaign, Urbana
Thomas S. Huang, University of Illinois at Urbana-Champaign, Urbana
ABSTRACT
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions, despite the fact that deliberate behavior differs in visual appearance, audio profile, and timing from spontaneously occurring behavior. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behavior have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis, including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next, we examine available approaches for machine understanding of human affective behavior and discuss important issues such as the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology.
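As an illustration of one fusion strategy surveyed in the paper, the minimal Python sketch below performs decision-level ("late") audiovisual fusion: each modality's classifier outputs posterior probabilities over a set of emotion classes, and the posteriors are combined by a weighted sum. The emotion labels, posterior values, and weights here are hypothetical and chosen only for illustration; they are not taken from the paper.

# Illustrative sketch only: decision-level ("late") audiovisual fusion,
# one of the fusion strategies the survey covers. All labels, scores,
# and weights below are hypothetical.

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def late_fusion(audio_post, video_post, w_audio=0.5, w_video=0.5):
    """Combine per-modality posteriors with a weighted sum, then renormalize."""
    fused = [w_audio * a + w_video * v for a, v in zip(audio_post, video_post)]
    total = sum(fused)
    return [f / total for f in fused]

# Hypothetical per-modality classifier outputs (posterior probabilities).
audio = [0.10, 0.60, 0.20, 0.10]   # e.g., from a prosody-based classifier
video = [0.05, 0.70, 0.15, 0.10]   # e.g., from a facial-expression classifier

fused = late_fusion(audio, video)
best = max(range(len(fused)), key=lambda i: fused[i])
print(EMOTIONS[best], fused[best])  # -> happiness 0.65

Feature-level ("early") fusion, by contrast, concatenates the audio and visual features before a single classifier is trained; the survey discusses the trade-offs between such strategies.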
INDEX TERMS
Introductory and Survey, Human-centered computing, Evaluation/methodology
CITATION
Zhihong Zeng, Maja Pantic, Glenn I. Roisman, and Thomas S. Huang, "A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39-58, January 2009, doi: 10.1109/TPAMI.2008.52.