Facial Expression Recognition Using Facial Movement Features
October-December 2011 (vol. 2 no. 4)
pp. 219-229
Ligang Zhang, Queensland University of Technology, Brisbane
Dian Tjondronegoro, Queensland University of Technology, Brisbane
Facial expression is an important channel for human communication and has many real-world applications. A critical step in facial expression recognition (FER) is accurately extracting emotional features. Current approaches to FER in static images have not fully considered or utilized the features of facial element and muscle movements, which capture the static and dynamic, as well as geometric and appearance, characteristics of facial expressions. This paper proposes an approach that addresses this limitation using "salient" distance features, obtained by extracting patch-based 3D Gabor features, selecting the "salient" patches, and performing patch matching operations. The experimental results demonstrate a high correct recognition rate (CRR), significant performance improvements from considering facial element and muscle movements, promising robustness to face registration errors, and fast processing time. Comparison with the state of the art confirms that the proposed approach achieves the highest CRR on the JAFFE database and is among the top performers on the Cohn-Kanade (CK) database.
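To make the Gabor-feature step concrete, the following is a minimal illustrative sketch (not the authors' exact pipeline) of filtering a face patch with a small bank of Gabor kernels and summarizing the response magnitudes; the kernel parameters and the `patch_gabor_features` helper are assumptions for illustration only.

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a 2D Gabor kernel at orientation theta and wavelength lam.
    Parameter values here are illustrative, not taken from the paper."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gaussian = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return gaussian * np.cos(2 * np.pi * xr / lam)

def patch_gabor_features(patch, n_orientations=4):
    """Filter one grayscale face patch with a Gabor bank; return one mean
    response magnitude per orientation as a small feature vector."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        # FFT-based convolution: zero-pad the kernel to the patch size
        resp = np.fft.ifft2(np.fft.fft2(patch) *
                            np.fft.fft2(kern, s=patch.shape)).real
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# Stand-in for a 32x32 grayscale face patch
patch = np.random.default_rng(0).random((32, 32))
features = patch_gabor_features(patch)
print(features.shape)  # one value per orientation
```

In the paper's approach these per-patch responses would additionally span multiple scales ("3D" Gabor features), with only the most discriminative ("salient") patches retained and compared via patch matching; this sketch shows only the basic filtering idea.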

Index Terms:
Facial expression analysis, feature evaluation and selection, computer vision, Gabor filter, Adaboost.
Citation:
Ligang Zhang, Dian Tjondronegoro, "Facial Expression Recognition Using Facial Movement Features," IEEE Transactions on Affective Computing, vol. 2, no. 4, pp. 219-229, Oct.-Dec. 2011, doi:10.1109/T-AFFC.2011.13