Issue No. 04 - Fourth Quarter 2012 (vol. 3)
ISSN: 1949-3045
pp: 456-468
Rana El Kaliouby , Massachusetts Institute of Technology, Cambridge
Daniel McDuff , Massachusetts Institute of Technology, Cambridge
Rosalind W. Picard , Massachusetts Institute of Technology, Cambridge
ABSTRACT
We present results validating a novel framework for collecting and analyzing facial responses to media content over the Internet. This system allowed 3,268 trackable face videos to be collected and analyzed in under two months. We characterize the data and present analysis of the smile responses of viewers to three commercials. We compare statistics from this corpus to those from the Cohn-Kanade+ (CK+) and MMI databases and show that the distributions of position, scale, pose, movement, and luminance of the facial region differ significantly from those represented in these traditionally used datasets. Next, we analyze the intensity and dynamics of smile responses and show significant differences in the facial responses of subgroups who report liking the commercials versus those who report disliking them. Similarly, we unveil significant differences between groups who were previously familiar with a commercial and those who were not, and propose a link to virality. Finally, we present relationships between head movement and facial behavior observed within the data. The framework, the data collected, and the analysis demonstrate an ecologically valid method for unobtrusive evaluation of facial responses to media content that is robust to challenging real-world conditions and requires no explicit recruitment or compensation of participants.
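To illustrate the kind of subgroup comparison the abstract describes (smile responses of viewers who reported liking a commercial versus those who did not), the sketch below compares per-viewer smile-intensity summaries with a nonparametric test. This is not the authors' actual pipeline; the file names, data shapes, and choice of test are assumptions made for the example.

```python
# Hypothetical sketch: compare smile-response summaries between viewers who
# reported liking a commercial and those who did not. Names, shapes, and the
# data files are illustrative assumptions, not taken from the paper.
import numpy as np
from scipy import stats

# Each row is one viewer's smile-intensity trace over the commercial
# (assumed shape: n_viewers x n_frames, values in [0, 1]).
liked = np.load("smiles_liked.npy")        # hypothetical file
disliked = np.load("smiles_disliked.npy")  # hypothetical file

# Summarize each trace by its mean and peak intensity.
liked_mean, disliked_mean = liked.mean(axis=1), disliked.mean(axis=1)
liked_peak, disliked_peak = liked.max(axis=1), disliked.max(axis=1)

# Use a nonparametric test, since smile intensities are bounded and often skewed.
for name, a, b in [("mean intensity", liked_mean, disliked_mean),
                   ("peak intensity", liked_peak, disliked_peak)]:
    u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: U={u:.1f}, p={p:.4f}")
```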
INDEX TERMS
Videos, Internet, advertising, content awareness, human factors, media, ethics, face recognition, market research, crowdsourcing, facial expressions, nonverbal behavior
CITATION
Rana El Kaliouby, Daniel McDuff, Rosalind W. Picard, "Crowdsourcing Facial Responses to Online Videos", IEEE Transactions on Affective Computing, vol. 3, no. 4, pp. 456-468, Fourth Quarter 2012, doi:10.1109/T-AFFC.2012.19