Issue No. 4 - Fourth Quarter 2012 (vol. 3)
pp. 456-468
Daniel McDuff , Massachusetts Institute of Technology, Cambridge
Rana El Kaliouby , Massachusetts Institute of Technology, Cambridge
Rosalind W. Picard , Massachusetts Institute of Technology, Cambridge
ABSTRACT
We present results validating a novel framework for collecting and analyzing facial responses to media content over the Internet. This system allowed 3,268 trackable face videos to be collected and analyzed in under two months. We characterize the data and present analysis of the smile responses of viewers to three commercials. We compare statistics from this corpus to those from the Cohn-Kanade+ (CK+) and MMI databases and show that distributions of position, scale, pose, movement, and luminance of the facial region are significantly different from those represented in these traditionally used datasets. Next, we analyze the intensity and dynamics of smile responses, and show that there are significantly different facial responses from subgroups who report liking the commercials compared to those who report not liking them. Similarly, we unveil significant differences between groups who were previously familiar with a commercial and those who were not, and propose a link to virality. Finally, we present relationships between head movement and facial behavior that were observed within the data. The framework, the data collected, and the analysis demonstrate an ecologically valid method for unobtrusive evaluation of facial responses to media content that is robust to challenging real-world conditions and requires no explicit recruitment or compensation of participants.
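The abstract reports that distributions of facial-region position, scale, pose, movement, and luminance in the crowdsourced videos differ significantly from those in the CK+ and MMI corpora. A two-sample Kolmogorov-Smirnov test is one standard way to quantify such a distributional difference; the sketch below is purely illustrative (the paper does not specify this exact test), and the synthetic luminance values stand in for real per-video measurements.

```python
# Illustrative sketch: comparing two empirical distributions with a
# two-sample Kolmogorov-Smirnov statistic (max gap between empirical CDFs).
# All numbers below are hypothetical stand-ins, not the paper's data.
import random
from bisect import bisect_right

def ks_statistic(a, b):
    """Two-sample KS statistic: largest vertical gap between the ECDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:  # the ECDF gap is maximized at a sample point
        ca = bisect_right(a, x) / len(a)
        cb = bisect_right(b, x) / len(b)
        d = max(d, abs(ca - cb))
    return d

random.seed(0)
# Hypothetical mean facial-region luminance per video (0-255 scale):
# lab-recorded corpora tend to be brighter and more uniform than
# webcam recordings captured "in the wild".
lab_luminance = [random.gauss(150, 10) for _ in range(300)]
web_luminance = [random.gauss(110, 35) for _ in range(300)]

stat = ks_statistic(lab_luminance, web_luminance)
print(f"KS statistic = {stat:.3f}")
```

A large statistic (near 1) indicates well-separated distributions; the corresponding p-value would come from the Kolmogorov distribution or a permutation test.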
INDEX TERMS
videos, Internet, advertising, content awareness, human factors, media, ethics, face recognition, market research, crowdsourcing, facial expressions, nonverbal behavior
CITATION
Daniel McDuff, Rana El Kaliouby, Rosalind W. Picard, "Crowdsourcing Facial Responses to Online Videos", IEEE Transactions on Affective Computing, vol.3, no. 4, pp. 456-468, Fourth Quarter 2012, doi:10.1109/T-AFFC.2012.19
REFERENCES
[1] P. Ekman, W. Friesen, and S. Ancoli, "Facial Signs of Emotional Experience," J. Personality and Social Psychology, vol. 39, no. 6, pp. 1125-1134, 1980.
[2] R. Hazlett and S. Hazlett, "Emotional Response to Television Commercials: Facial EMG vs. Self-Report," J. Advertising Research, vol. 39, pp. 7-24, 1999.
[3] A. Quinn and B. Bederson, "Human Computation: A Survey and Taxonomy of a Growing Field," Proc. Ann. Conf. Human Factors in Computing Systems, pp. 1403-1412, 2011.
[4] G. Taylor, I. Spiro, C. Bregler, and R. Fergus, "Learning Invariance through Imitation," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2011.
[5] R. Batra and M. Ray, "Affective Responses Mediating Acceptance of Advertising," J. Consumer Research, vol. 13, pp. 234-249, 1986.
[6] T. Teixeira, M. Wedel, and R. Pieters, "Emotion-Induced Engagement in Internet Video Ads," J. Marketing Research, vol. 49, no. 2, pp. 144-159, 2010.
[7] M.E. Hoque and R. Picard, "Acted vs. Natural Frustration and Delight: Many People Smile in Natural Frustration," Proc. IEEE Int'l Conf. Automatic Face & Gesture Recognition and Workshops, 2011.
[8] H. Gunes, M. Piccardi, and M. Pantic, "From the Lab to the Real World: Affect Recognition Using Multiple Cues and Modalities," Affective Computing: Focus on Emotion Expression, Synthesis, and Recognition, pp. 185-218, I-Tech Education and Publishing, 2008.
[9] A. Fridlund, "Sociality of Solitary Smiling: Potentiation by an Implicit Audience," J. Personality and Social Psychology, vol. 60, no. 2, pp. 229-240, 1991.
[10] T. Teixeira, M. Wedel, and R. Pieters, "Moment-to-Moment Optimal Branding in TV Commercials: Preventing Avoidance by Pulsing," Marketing Science, vol. 29, no. 5, pp. 783-804, 2010.
[11] N. Schwarz and F. Strack, "Reports of Subjective Well-Being: Judgmental Processes and Their Methodological Implications," Well-Being: The Foundations of Hedonic Psychology, pp. 61-84, Russell Sage Foundation, 1999.
[12] M. Lieberman, N. Eisenberger, M. Crockett, S. Tom, J. Pfeifer, and B. Way, "Putting Feelings into Words," Psychological Science, vol. 18, no. 5, pp. 421-428, 2007.
[13] J. Whitehill, G. Littlewort, I. Fasel, M. Bartlett, and J. Movellan, "Toward Practical Smile Detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 31, no. 11, pp. 2106-2111, Nov. 2009.
[14] Z. Ambadar, J. Cohn, and L. Reed, "All Smiles Are Not Created Equal: Morphology and Timing of Smiles Perceived as Amused, Polite, and Embarrassed/Nervous," J. Nonverbal Behavior, vol. 33, no. 1, pp. 17-34, 2009.
[15] P. Ekman and W. Friesen, Facial Action Coding System. Consulting Psychologists Press, 1977.
[16] Z. Zeng, M. Pantic, G. Roisman, and T. Huang, "A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39-58, Jan. 2009.
[17] P. Lucey, J. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The Extended Cohn-Kanade Dataset (CK+): A Complete Dataset for Action Unit and Emotion-Specified Expression," Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops, pp. 94-101, 2010.
[18] M.S. Bartlett, G. Littlewort, I. Fasel, and J.R. Movellan, "Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction," Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshop, vol. 5, p. 53, 2003.
[19] I. Cohen, N. Sebe, A. Garg, L. Chen, and T. Huang, "Facial Expression Recognition from Video Sequences: Temporal and Static Modeling," Computer Vision and Image Understanding, vol. 91, nos. 1/2, pp. 160-187, 2003.
[20] J. Cohn, L. Reed, Z. Ambadar, J. Xiao, and T. Moriyama, "Automatic Analysis and Recognition of Brow Actions and Head Motion in Spontaneous Facial Behavior," Proc. IEEE Int'l Conf. Systems, Man, and Cybernetics, vol. 1, pp. 610-616, 2004.
[21] G. Littlewort, M. Bartlett, I. Fasel, J. Chenu, and J. Movellan, "Analysis of Machine Learning Methods for Real-Time Recognition of Facial Expressions from Video," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2004.
[22] P. Michel and R. El Kaliouby, "Real Time Facial Expression Recognition in Video Using Support Vector Machines," Proc. Fifth Int'l Conf. Multimodal Interfaces, pp. 258-264, 2003.
[23] M. Pantic, M. Valstar, R. Rademaker, and L. Maat, "Web-Based Database for Facial Expression Analysis," Proc. IEEE Int'l Conf. Multimedia and Expo, p. 5, 2005.
[24] G. McKeown, M. Valstar, R. Cowie, and M. Pantic, "The SEMAINE Corpus of Emotionally Coloured Character Interactions," Proc. IEEE Int'l Conf. Multimedia and Expo, pp. 1079-1084, July 2010.
[25] M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan, "Automatic Recognition of Facial Actions in Spontaneous Expressions," J. Multimedia, vol. 1, no. 6, pp. 22-35, 2006.
[26] E. Douglas-Cowie et al., "The Humaine Database: Addressing the Collection and Annotation of Naturalistic and Induced Emotional Data," Proc. Second Int'l Conf. Affective Computing and Intelligent Interaction, pp. 488-500, 2007.
[27] P. Lucey, J. Cohn, K. Prkachin, P. Solomon, and I. Matthews, "Painful Data: The UNBC-McMaster Shoulder Pain Expression Archive Database," Proc. IEEE Int'l Conf. Automatic Face & Gesture Recognition and Workshops, pp. 57-64, 2011.
[28] D. McDuff, R. El Kaliouby, K. Kassam, and R. Picard, "Affect Valence Inference from Facial Action Unit Spectrograms," Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops, pp. 17-24, 2011.
[29] H. Joho, J. Staiano, N. Sebe, and J. Jose, "Looking at the Viewer: Analysing Facial Activity to Detect Personal Highlights of Multimedia Contents," Multimedia Tools and Applications, pp. 1-19, 2011.
[30] J. Berger and K. Milkman, "What Makes Online Content Viral?" J. Marketing Research, vol. 49, pp. 192-205, 2012.
[31] T. Ambler and T. Burne, "The Impact of Affect on Memory of Advertising," J. Advertising Research, vol. 39, pp. 25-34, 1999.
[32] A. Mehta and S. Purvis, "Reconsidering Recall and Emotion in Advertising," J. Advertising Research, vol. 46, no. 1, pp. 49-56, 2006.
[33] R. Haley, "The ARF Copy Research Validity Project: Final Report," Proc. Seventh Ann. ARF Copy Research Workshop, 1990.
[34] E. Smit, L. Van Meurs, and P. Neijens, "Effects of Advertising Likeability: A 10-Year Perspective," J. Advertising Research, vol. 46, no. 1, pp. 73-83, 2006.
[35] K. Poels and S. Dewitte, "How to Capture the Heart? Reviewing 20 Years of Emotion Measurement in Advertising," J. Advertising Research, vol. 46, no. 1, p. 18, 2006.
[36] J. Howe, Crowdsourcing: How the Power of the Crowd Is Driving the Future of Business. Century, 2008.
[37] R. Morris, "Crowdsourcing Workshop: The Emergence of Affective Crowdsourcing," Proc. Ann. Conf. Extended Abstracts on Human Factors in Computing Systems, 2011.
[38] Web address of data collection site: http://www.forbes.com/2011/02/28/detect-smile-webcam-affectiva-mit-media-lab.html, 2012.
[39] D. Svantesson, "Geo-Location Technologies and Other Means of Placing Borders on the 'Borderless' Internet," John Marshall J. Computer & Information Law, vol. 23, pp. 101-845, 2004.
[40] D. McDuff, R. El Kaliouby, and R. Picard, "Crowdsourced Data Collection of Facial Responses," Proc. 13th Int'l Conf. Multimodal Interaction, 2011.
[41] T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, July 2002.
[42] A. Biel, "Love the Ad. Buy the Product?" Admap, vol. 26, pp. 21-25, Sept. 1990.