Issue No. 04, Oct.-Dec. 2014 (vol. 5)
Albert C. Cruz , Department of Computer Science, California State University, Bakersfield, Science Building III 340, Bakersfield, CA
Bir Bhanu , Center for Research in Intelligent Systems, University of California, Riverside, Winston Chung Hall 216, Riverside, CA
Ninad S. Thakoor , Center for Research in Intelligent Systems, University of California, Riverside, Winston Chung Hall 216, Riverside, CA
Affective computing, the emergent field in which computers detect emotions and project appropriate expressions of their own, has reached a bottleneck: algorithms cannot reliably infer a person's emotions from natural, spontaneous facial expressions captured in video. While emotion recognition has advanced considerably over the past decade, no facial emotion recognition approach has yet been shown to perform well in unconstrained settings. In this paper, we propose a principled method that addresses the temporal dynamics of facial emotions and expressions in video with a sampling approach inspired by human perceptual psychology. We test the efficacy of the method on the Audio/Visual Emotion Challenge 2011 and 2012 datasets, the Cohn-Kanade dataset, and the MMI Facial Expression Database. The method shows an average improvement of 9.8 percent over the baseline in weighted accuracy on the Audio/Visual Emotion Challenge 2011 video-based frame-level subchallenge testing set.
Index terms: Support vector machines, Emotion recognition, Visualization, Hidden Markov models, Feature extraction, Time-frequency analysis, Optical imaging
A. C. Cruz, B. Bhanu, and N. S. Thakoor, "Vision and Attention Theory Based Sampling for Continuous Facial Emotion Recognition," IEEE Transactions on Affective Computing, vol. 5, no. 4, pp. 418-431, Oct.-Dec. 2014.