Data-Free Prior Model for Facial Action Unit Recognition
April-June 2013 (vol. 4, no. 2)
pp. 127-141
Yongqiang Li, Harbin Institute of Technology, Harbin
Jixu Chen, GE Global Research Center, Niskayuna
Yongping Zhao, Harbin Institute of Technology, Harbin
Qiang Ji, Rensselaer Polytechnic Institute, Troy
Facial action recognition is concerned with recognizing local facial motions from images or video. In recent years, besides the development of facial feature extraction and classification techniques, prior models have been introduced to capture the dynamic and semantic relationships among facial action units (AUs). Previous work has shown that combining these prior models with image measurements can yield improved AU recognition performance. Most of these prior models, however, are learned from data, so their performance depends heavily on both the quality and quantity of the training data. Such data-trained prior models generalize poorly to new databases in which the learned AU relationships do not hold. To alleviate this problem, we propose a knowledge-driven prior model for AU recognition that is learned exclusively from the generic domain knowledge governing AU behavior; no training data are used. Experimental results show that, with no training data but generic domain knowledge alone, the proposed knowledge-driven model achieves results comparable to data-driven models on a specific database and significantly outperforms them when generalizing to new data sets.
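The abstract's core idea, fusing a prior over AU label configurations with per-AU image measurements, can be sketched in a few lines. The toy Python below is an illustration only, not the paper's actual Bayesian network model: the AU subset, the domain-knowledge rules, and the measurement scores are hypothetical, and brute-force enumeration over configurations stands in for proper probabilistic inference.

```python
import itertools

# Hypothetical three-AU example (not the paper's AU set):
# AU1 inner brow raiser, AU2 outer brow raiser, AU4 brow lowerer.
AUS = ["AU1", "AU2", "AU4"]

def prior(config):
    """Unnormalized prior encoding two generic domain-knowledge rules:
    AU1 and AU2 tend to co-occur, while AU1 and AU4 rarely do.
    The weights here are illustrative assumptions."""
    score = 1.0
    if config["AU1"] == config["AU2"]:
        score *= 3.0   # reward joint presence or joint absence
    if config["AU1"] and config["AU4"]:
        score *= 0.2   # penalize an unlikely pairing
    return score

def likelihood(config, measurements):
    """Independent per-AU measurement model: measurements[au] is a
    classifier's probability that the AU is active in the image."""
    lik = 1.0
    for au in AUS:
        p = measurements[au]
        lik *= p if config[au] else (1.0 - p)
    return lik

def posterior(measurements):
    """Posterior over all 2^N AU configurations by exhaustive
    enumeration (fine for a handful of AUs; real systems would use
    Bayesian network inference instead)."""
    scores = {}
    for bits in itertools.product([False, True], repeat=len(AUS)):
        config = dict(zip(AUS, bits))
        scores[bits] = prior(config) * likelihood(config, measurements)
    z = sum(scores.values())
    return {cfg: s / z for cfg, s in scores.items()}

# Noisy measurements: AU1 looks active, AU2 is ambiguous, AU4 looks weak.
# The prior's co-occurrence rule pulls the ambiguous AU2 toward "active".
post = posterior({"AU1": 0.8, "AU2": 0.5, "AU4": 0.3})
best = max(post, key=post.get)
print(dict(zip(AUS, best)), round(post[best], 3))
```

The point of the sketch is the factorization: the prior depends only on generic AU relationships, not on any training database, which is what lets a knowledge-driven prior transfer to new data sets where data-learned relationships may not hold.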
Index Terms:
Data models, hidden Markov models, image recognition, face recognition, training data, computational modeling, knowledge-driven model, facial action unit recognition, Bayesian networks
Citation:
Yongqiang Li, Jixu Chen, Yongping Zhao, Qiang Ji, "Data-Free Prior Model for Facial Action Unit Recognition," IEEE Transactions on Affective Computing, vol. 4, no. 2, pp. 127-141, April-June 2013, doi:10.1109/T-AFFC.2013.5