ABSTRACT
Affective computing, the emergent field in which computers detect emotions and project appropriate expressions of their own, has reached a bottleneck: algorithms are not yet able to infer a person's emotions from natural, spontaneous facial expressions captured in video. While the field of emotion recognition has seen many advances in the past decade, no facial emotion recognition approach has yet been demonstrated that performs well in unconstrained settings. In this paper, we propose a principled method that addresses the temporal dynamics of facial emotions and expressions in video with a sampling approach inspired by human perceptual psychology. We test the efficacy of the method on the Audio/Visual Emotion Challenge 2011 and 2012, Cohn-Kanade, and the MMI Facial Expression Database. The method shows an average improvement of 9.8% over the baseline in weighted accuracy on the Audio/Visual Emotion Challenge 2011 video-based frame-level subchallenge testing set.
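The abstract does not specify how the perceptually inspired sampling operates, so the snippet below is only an illustrative sketch, not the authors' method: it shows one generic way a temporal-dynamics-aware sampler might select video frames, drawing more samples where per-frame facial descriptors change rapidly. The function name change_driven_sample, and the frame_features and budget parameters, are hypothetical.

```python
import numpy as np


def change_driven_sample(frame_features: np.ndarray, budget: int) -> np.ndarray:
    """Pick `budget` frame indices, favoring moments of rapid facial change.

    frame_features: (T, D) array of per-frame facial descriptors.
    Frames where the descriptor changes quickly receive a higher
    sampling weight; calm stretches are sampled sparsely.
    """
    # Magnitude of frame-to-frame change in the descriptor.
    diffs = np.linalg.norm(np.diff(frame_features, axis=0), axis=1)
    # Pad to length T and keep every weight strictly positive.
    weights = np.concatenate([[diffs[0]], diffs]) + 1e-6
    probs = weights / weights.sum()

    rng = np.random.default_rng(0)
    idx = rng.choice(
        len(frame_features),
        size=min(budget, len(frame_features)),
        replace=False,
        p=probs,
    )
    return np.sort(idx)


# Toy usage: 300 frames of 64-D descriptors, keep 30 frames.
features = np.random.default_rng(1).normal(size=(300, 64))
print(change_driven_sample(features, 30))
```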
CITATION
Ninad Thakoor, "Vision and Attention Theory Based Sampling for Continuous Facial Emotion Recognition," IEEE Transactions on Affective Computing, no. 1, pp. 1, PrePrints, doi:10.1109/TAFFC.2014.2316151