Recognizing Action Units for Facial Expression Analysis
IEEE Transactions on Pattern Analysis and Machine Intelligence, February 2001 (vol. 23, no. 2), pp. 97-115

Abstract—Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. Instead of a few prototypic expressions, the AFA system classifies fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS). Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as inputs, a group of action units (neutral expression, six upper face AUs, and ten lower face AUs) is recognized, whether the AUs occur alone or in combination. The system achieves average recognition rates of 96.4 percent for upper face AUs (95.4 percent if neutral expressions are excluded) and 96.7 percent for lower face AUs (95.6 percent if neutral expressions are excluded). The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams.
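The abstract describes a pipeline in which parametric descriptions of tracked facial features are fed to a classifier (a neural network, per the index terms) that outputs AUs, alone or in combination. The sketch below is a minimal illustration of that idea, not the authors' implementation: the feature names, network sizes, thresholds, and the particular AU label set are assumptions, and a plain one-hidden-layer network in NumPy stands in for whatever architecture the paper actually uses.

import numpy as np

# Hypothetical parametric description of the upper face, as might be
# extracted by feature tracking (names and dimensionality are
# illustrative, not taken from the paper).
UPPER_FACE_FEATURES = [
    "brow_height_left", "brow_height_right",   # brow position vs. neutral
    "eye_opening_left", "eye_opening_right",   # eyelid distance
    "furrow_depth_nasal_root",                 # transient feature (furrow)
]

# Upper face AUs treated as independent outputs, so combinations
# (e.g., AU1+AU2+AU5) fall out of thresholding each unit separately.
UPPER_FACE_AUS = ["AU1", "AU2", "AU4", "AU5", "AU6", "AU7"]

rng = np.random.default_rng(0)

class TinyAUNet:
    """One-hidden-layer network mapping feature parameters to per-AU scores."""

    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = np.tanh(x @ self.w1 + self.b1)                       # hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ self.w2 + self.b2)))   # sigmoid per AU

net = TinyAUNet(len(UPPER_FACE_FEATURES), 8, len(UPPER_FACE_AUS))

# One frame's (made-up) normalized feature vector.
x = np.array([0.8, 0.8, 0.6, 0.6, 0.2])
scores = net.forward(x)
active = [au for au, s in zip(UPPER_FACE_AUS, scores) if s > 0.5]
print("Detected AUs (untrained network, illustrative only):", active)

Treating each AU as an independent sigmoid output, rather than one softmax over whole-expression classes, is one simple way to let AU combinations emerge from per-unit thresholding; the features and network the authors actually use are described in the full text.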


Index Terms:
Computer vision, multistate face and facial component models, facial expression analysis, facial action coding system, action units, AU combinations, neural network.
Citation:
Ying-li Tian, Takeo Kanade, Jeffrey F. Cohn, "Recognizing Action Units for Facial Expression Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 97-115, Feb. 2001, doi:10.1109/34.908962