Issue No. 2, April-June 2013 (vol. 4)
pp. 127-141
Yongqiang Li , Harbin Institute of Technology, Harbin
Jixu Chen , GE Global Research Center, Niskayuna
Yongping Zhao , Harbin Institute of Technology, Harbin
Qiang Ji , Rensselaer Polytechnic Institute, Troy
ABSTRACT
Facial action recognition is concerned with recognizing local facial motions from images or video. In recent years, besides advances in facial feature extraction and classification techniques, prior models have been introduced to capture the dynamic and semantic relationships among facial action units (AUs). Previous work has shown that combining these prior models with image measurements can improve AU recognition performance. Most such prior models, however, are learned from data, so their performance depends heavily on both the quality and quantity of the training data. These data-trained prior models generalize poorly to new databases in which the learned AU relationships do not hold. To alleviate this problem, we propose a knowledge-driven prior model for AU recognition that is learned exclusively from the generic domain knowledge governing AU behavior, with no training data. Experimental results show that, using generic domain knowledge alone, the proposed knowledge-driven model achieves results comparable to a data-driven model on a specific database and significantly outperforms data-driven models when generalizing to a new data set.
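The core idea of combining a knowledge-based prior over AU relationships with per-AU image measurements can be illustrated with a toy Bayesian calculation. The sketch below is not the authors' model: it uses only two AUs, an illustrative hand-set joint prior encoding one piece of generic FACS knowledge (AU6, cheek raiser, and AU12, lip corner puller, tend to co-occur in smiles), and hypothetical detector scores; all numeric values are assumptions for illustration.

```python
# Toy sketch (not the paper's actual model): fusing a knowledge-driven
# joint prior over two AUs with independent image measurement scores.
from itertools import product

# Hand-specified prior P(AU6, AU12), states are (absent=0, present=1).
# Co-occurring states get more mass, encoding the generic domain
# knowledge that AU6 and AU12 tend to appear together; no data is used.
prior = {
    (0, 0): 0.40, (1, 1): 0.40,   # consistent states favored
    (0, 1): 0.10, (1, 0): 0.10,   # inconsistent states penalized
}

def likelihood(score, au_state):
    """Treat a detector score in [0, 1] as soft evidence P(AU present)."""
    return score if au_state == 1 else 1.0 - score

def posterior(score6, score12):
    """Posterior P(AU6, AU12 | scores) by direct enumeration."""
    unnorm = {
        s: prior[s] * likelihood(score6, s[0]) * likelihood(score12, s[1])
        for s in product((0, 1), repeat=2)
    }
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# A weak AU6 score (0.45) alone would suggest "absent", but a strong
# AU12 score plus the co-occurrence prior pulls the joint MAP estimate
# to (AU6 present, AU12 present).
post = posterior(score6=0.45, score12=0.9)
map_state = max(post, key=post.get)
```

Without the prior (i.e., with a uniform prior over the four joint states), the same scores would yield the MAP state (0, 1), so the example shows exactly how relational knowledge corrects a noisy individual measurement.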
INDEX TERMS
Gold, Data models, Hidden Markov models, Image recognition, Face recognition, Training data, Computational modeling, Knowledge-driven model, Facial action unit recognition, Bayesian networks
CITATION
Yongqiang Li, Jixu Chen, Yongping Zhao, Qiang Ji, "Data-Free Prior Model for Facial Action Unit Recognition", IEEE Transactions on Affective Computing, vol.4, no. 2, pp. 127-141, April-June 2013, doi:10.1109/T-AFFC.2013.5