Measuring Voter's Candidate Preference Based on Affective Responses to Election Debates
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Daniel McDuff, Rana El Kaliouby, Evan Kodra, Rosalind Picard
Issue Date: September 2013
pp. 369-374
In this paper we present the first analysis of facial responses to electoral debates measured automatically over the Internet. We show that significantly different responses can be detected from viewers with different political preferences and that similar...
Crowdsourcing Facial Responses to Online Videos
Found in: IEEE Transactions on Affective Computing
By Daniel McDuff, Rana El Kaliouby, Rosalind W. Picard
Issue Date: September 2012
pp. 456-468
We present results validating a novel framework for collecting and analyzing facial responses to media content over the Internet. This system allowed 3,268 trackable face videos to be collected and analyzed in under two months. We characterize the data and...
Predicting online media effectiveness based on smile responses gathered over the Internet
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Daniel McDuff, Rana el Kaliouby, David Demirdjian, Rosalind Picard
Issue Date: April 2013
pp. 1-7
We present an automated method for classifying “liking” and “desire to view again” based on over 1,500 facial responses to media collected over the Internet. This is a very challenging pattern recognition problem that involves robust detection of smile int...
From dials to facial coding: Automated detection of spontaneous facial expressions for media research
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Evan Kodra, Thibaud Senechal, Daniel McDuff, Rana el Kaliouby
Issue Date: April 2013
pp. 1-6
Typical consumer media research requires the recruitment and coordination of hundreds of panelists and the use of relatively expensive equipment. In this work, we compare results from a legacy hardware dial mechanism for measuring media preference to those...
Towards sensing the influence of visual narratives on human affect
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Alexis Narvaez, Daniel McDuff, Louis-Philippe Morency, Mihai Burzo, Rada Mihalcea, Veronica Perez-Rosas
Issue Date: October 2012
pp. 153-160
In this paper, we explore a multimodal approach to sensing affective state during exposure to visual narratives. Using four different modalities, consisting of visual facial behaviors, thermal imaging, heart rate measurements, and verbal descriptions, we s...
Crowdsourced data collection of facial responses
Found in: Proceedings of the 13th international conference on multimodal interfaces (ICMI '11)
By Daniel McDuff, Rana el Kaliouby, Rosalind Picard
Issue Date: November 2011
pp. 11-18
In the past, collecting data to train facial expression and affect recognition systems has been time consuming and often led to data that do not include spontaneous expressions. We present the first crowdsourced data collection of dynamic, natural and spon...