Displaying 1-37 out of 37 total
Filtered Component Analysis to Increase Robustness to Local Minima in Appearance Models
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Fernando De la Torre, Alvaro Collet, Manuel Quero, Jeffrey F. Cohn, Takeo Kanade
Issue Date: June 2007
pp. 1-8
Appearance Models (AM) are commonly used to model appearance and shape variation of objects in images. In particular, they have proven useful to detection, tracking, and synthesis of people's faces from video. While AM have numerous advantages relative to ...
 
Unsupervised discovery of facial events
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Feng Zhou, Fernando De la Torre, Jeffrey F. Cohn
Issue Date: June 2010
pp. 2574-2581
Automatic facial image analysis has been a long standing research problem in computer vision. A key component in facial image analysis, largely conditioning the success of subsequent algorithms (e.g. facial expression recognition), is to define a vocabular...
 
Facing Imbalanced Data--Recommendations for the Use of Performance Metrics
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Laszlo A. Jeni, Jeffrey F. Cohn, Fernando De la Torre
Issue Date: September 2013
pp. 245-251
Recognizing facial action units (AUs) is important for situation analysis and automated video annotation. Previous work has emphasized face tracking and registration and the choice of features classifiers. Relatively neglected is the effect of imbalanced d...
 
Head Movement Dynamics during Normal and Perturbed Parent-Infant Interaction
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Zakia Hammal, Jeffrey F. Cohn, Daniel S. Messinger, Whitney I. Mattson, Mohammad H. Mahoor
Issue Date: September 2013
pp. 276-282
We investigated the dynamics of head motion in parents and infants during an age-appropriate, well-validated emotion induction, the Face-to-Face/Still-Face procedure. Participants were 12 ethnically diverse 6-month-old infants and their mother or father. D...
 
Continuous AU intensity estimation using localized, sparse facial feature space
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Laszlo A. Jeni, Jeffrey M. Girard, Jeffrey F. Cohn, Fernando De la Torre
Issue Date: April 2013
pp. 1-7
Most work in automatic facial expression analysis seeks to detect discrete facial actions. Yet, the meaning and function of facial actions often depends in part on their intensity. We propose a part-based, sparse representation for automated measurement of...
   
Social risk and depression: Evidence from manual and automatic facial expression analysis
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Jeffrey M. Girard, Jeffrey F. Cohn, Mohammad H. Mahoor, Seyedmohammad Mavadati, Dean P. Rosenwald
Issue Date: April 2013
pp. 1-8
Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions...
   
Temporal Segmentation of Facial Behavior
Found in: Computer Vision, IEEE International Conference on
By Fernando De la Torre, Joan Campoy, Zara Ambadar, Jeffrey F. Cohn
Issue Date: October 2007
pp. 1-8
Temporal segmentation of facial gestures in spontaneous facial behavior recorded in real-world settings is an important, unsolved, and relatively unexplored problem in facial image analysis. Several issues contribute to the challenge of this task. These in...
 
Individual Differences in Facial Expression: Stability over Time, Relation to Self-Reported Emotion, and Ability to Inform Person Identification
Found in: Multimodal Interfaces, IEEE International Conference on
By Jeffrey F. Cohn, Karen Schmidt, Ralph Gross, Paul Ekman
Issue Date: October 2002
pp. 491
The face can communicate varied personal information including subjective emotion, communicative intent, and cognitive appraisal. Accurate interpretation by observer or computer interface depends on attention to dynamic properties of the expression, contex...
 
Robust Full-Motion Recovery of Head by Dynamic Templates and Re-Registration Techniques
Found in: Automatic Face and Gesture Recognition, IEEE International Conference on
By Jing Xiao, Takeo Kanade, Jeffrey F. Cohn
Issue Date: May 2002
pp. 163
This paper presents a method to recover the full-motion (3 rotations and 3 translations) of the head using a cylindrical model. The robustness of the approach is achieved by a combination of three techniques. First, we use the iteratively re-weighted least...
 
Learning 3D Appearance Models from Video
Found in: Automatic Face and Gesture Recognition, IEEE International Conference on
By Fernando De la Torre, Jordi Casoliva, Jeffrey F. Cohn
Issue Date: May 2004
pp. 645
Within the past few years, there has been a great interest in face modeling for analysis (e.g. facial expression recognition) and synthesis (e.g. virtual avatars). Two primary approaches are appearance models (AM) and structure from motion (SFM). While ext...
 
DISFA: A Spontaneous Facial Action Intensity Database
Found in: IEEE Transactions on Affective Computing
By S. Mohammad Mavadati, Mohammad H. Mahoor, Kevin Bartlett, Philip Trinh, Jeffrey F. Cohn
Issue Date: April 2013
pp. 151-160
Access to well-labeled recordings of facial expression is critical to progress in automated facial expression recognition. With few exceptions, publicly available databases are limited to posed facial behavior that can differ markedly in conformation, inte...
 
Detecting Depression Severity from Vocal Prosody
Found in: IEEE Transactions on Affective Computing
By Ying Yang, Catherine Fairbairn, Jeffrey F. Cohn
Issue Date: April 2013
pp. 142-150
To investigate the relation between vocal prosody and change in depression severity over time, 57 participants from a clinical trial for treatment of depression were evaluated at seven-week intervals using a semistructured clinical interview for depression...
 
Recognizing Action Units for Facial Expression Analysis
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Ying-li Tian, Takeo Kanade, Jeffrey F. Cohn
Issue Date: February 2001
pp. 97-115
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human...
 
Action unit detection with segment-based SVMs
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Tomas Simon, Minh Hoai Nguyen, Fernando De la Torre, Jeffrey F. Cohn
Issue Date: June 2010
pp. 2737-2744
Automatic facial action unit (AU) detection from video is a long-standing problem in computer vision. Two main approaches have been pursued: (1) static modeling — typically posed as a discriminative classification problem in which each video frame is evalu...
 
Relative Body Parts Movement for Automatic Depression Analysis
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Jyoti Joshi, Abhinav Dhall, Roland Goecke, Jeffrey F. Cohn
Issue Date: September 2013
pp. 492-497
In this paper, a human body part motion analysis based approach is proposed for depression analysis. Depression is a serious psychological disorder. The absence of an (automated) objective diagnostic aid for depression leads to a range of subjective biases...
 
Registration Invariant Representations for Expression Detection
Found in: Digital Image Computing: Techniques and Applications
By Patrick Lucey, Simon Lucey, Jeffrey F. Cohn
Issue Date: December 2010
pp. 255-261
Active appearance model (AAM) representations have been used to great effect recently in the accurate detection of expression events (e.g., action units, pain, broad expressions, etc.). The motivation for their use, and rationale for their success, lies in...
 
Enforcing convexity for improved alignment with constrained local models
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Yang Wang, Simon Lucey, Jeffrey F. Cohn
Issue Date: June 2008
pp. 1-8
Constrained local models (CLMs) have recently demonstrated good performance in non-rigid object alignment/ tracking in comparison to leading holistic approaches (e.g., AAMs). A major problem hindering the development of CLMs further, for non-rigid object a...
 
Meticulously Detailed Eye Region Model and Its Application to Analysis of Facial Images
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Tsuyoshi Moriyama, Takeo Kanade, Jing Xiao, Jeffrey F. Cohn
Issue Date: May 2006
pp. 738-752
We propose a system that is capable of detailed analysis of eye region images in terms of the position of the iris, degree of eyelid opening, and the shape, complexity, and texture of the eyelids. The system uses a generative eye region model that paramete...
 
Multimodal Coordination of Facial Action, Head Rotation, and Eye Motion during Spontaneous Smiles
Found in: Automatic Face and Gesture Recognition, IEEE International Conference on
By Jeffrey F. Cohn, Lawrence Ian Reed, Tsuyoshi Moriyama, Jing Xiao, Karen Schmidt, Zara Ambadar
Issue Date: May 2004
pp. 129
Both the configuration of facial features and the timing of facial actions are important to emotion and communication. Previous literature has focused on the former. We developed an automatic facial expression analysis system that quantifies the timing of ...
 
Automatic Recognition of Eye Blinking in Spontaneously Occurring Behavior
Found in: Pattern Recognition, International Conference on
By Tsuyoshi Moriyama, Takeo Kanade, Jeffrey F. Cohn, Jing Xiao, Zara Ambadar, Jiang Gao, Hiroki Imamura
Issue Date: August 2002
pp. 40078
Previous research in automatic facial expression recognition has been limited to recognition of gross expression categories (e.g., joy or anger) in posed facial behavior under well-controlled conditions (e.g., frontal pose and minimal out-of-plane head mot...
 
Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity
Found in: Automatic Face and Gesture Recognition, IEEE International Conference on
By Ying-li Tian, Takeo Kanade, Jeffrey F. Cohn
Issue Date: May 2002
pp. 229
Previous work suggests that Gabor-wavelet-based methods can achieve high sensitivity and specificity for emotion-specified expressions (e.g., happy, sad) and single action units (AUs) of the Facial Action Coding System (FACS). This paper evaluates a Gabor-...
 
Recognizing Upper Face Action Units for Facial Expression Analysis
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Ying-li Tian, Takeo Kanade, Jeffrey F. Cohn
Issue Date: June 2000
pp. 1294
We develop an automatic system to analyze subtle changes in upper face expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal image sequence. Our system recog...
 
Comprehensive Database for Facial Expression Analysis
Found in: Automatic Face and Gesture Recognition, IEEE International Conference on
By Takeo Kanade, Yingli Tian, Jeffrey F. Cohn
Issue Date: March 2000
pp. 46
Within the past decade, significant effort has occurred in developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the p...
 
Dual-State Parametric Eye Tracking
Found in: Automatic Face and Gesture Recognition, IEEE International Conference on
By Ying-li Tian, Takeo Kanade, Jeffrey F. Cohn
Issue Date: March 2000
pp. 110
Most eye trackers work well for open eyes. However, blinking is a physiological necessity for humans. Moreover, for applications such as facial expression analysis and driver awareness systems, we need to do more than tracking the locations of the person's...
 
Interpersonal Coordination of Head Motion in Distressed Couples
Found in: IEEE Transactions on Affective Computing
By Zakia Hammal, Jeffrey F. Cohn, David Ted George
Issue Date: April 2014
pp. 1-1
In automatic emotional expression analysis, head motion has been considered mostly a nuisance variable, something to control when extracting features for action unit or expression detection. As an initial step toward understanding the contribution of head ...
 
A lp-norm MTMKL framework for simultaneous detection of multiple facial action units
Found in: 2014 IEEE Winter Conference on Applications of Computer Vision (WACV)
By Xiao Zhang, Mohammad H. Mahoor, S. Mohammad Mavadati, Jeffrey F. Cohn
Issue Date: March 2014
pp. 1104-1111
Facial action unit (AU) detection is a challenging topic in computer vision and pattern recognition. Most existing approaches design classifiers to detect AUs individually or AU combinations without considering the intrinsic relations among AUs. This paper...
   
A high-resolution spontaneous 3D dynamic facial expression database
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Xing Zhang, Lijun Yin, Jeffrey F. Cohn, Shaun Canavan, Michael Reale, Andy Horowitz, Peng Liu
Issue Date: April 2013
pp. 1-6
Facial expression is central to human experience. Its efficient and valid measurement is a challenge that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavio...
   
A comparison of alternative classifiers for detecting occurrence and intensity in spontaneous facial expression of infants with their mothers
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Nazanin Zaker, Mohammad H. Mahoor, Whitney I. Mattson, Daniel S. Messinger, Jeffrey F. Cohn
Issue Date: April 2013
pp. 1-6
To model the dynamics of social interaction, it is necessary both to detect specific Action Units (AUs) and variation in their intensity and coordination over time. An automated method that performs well when detecting occurrence may or may not perform wel...
   
The temporal connection between smiles and blinks
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Laura C. Trutoiu, Jessica K. Hodgins, Jeffrey F. Cohn
Issue Date: April 2013
pp. 1-6
In this paper, we present evidence for a temporal relationship between eye blinks and smile dynamics (smile onset and offset). Smiles and blinks occur with high frequency during social interaction, yet little is known about their temporal integration. To e...
   
Temporal coordination of head motion in couples with history of interpersonal violence
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Zakia Hammal, Teresa E. Bailie, Jeffrey F. Cohn, David T. George, Jason Saragih, Jesus Nuevo Chiquero, Simon Lucey
Issue Date: April 2013
pp. 1-8
Previous research in automated expression analysis has focused on discrete actions with little attention to their timing either within or between persons. We investigated the interpersonal coordination of rigid head motion in 11 intimate couples with a his...
   
Spontaneous vs. posed facial behavior: automatic analysis of brow actions
Found in: Proceedings of the 8th international conference on Multimodal interfaces (ICMI '06)
By Jeffrey F. Cohn, Maja Pantic, Michel F. Valstar, Zara Ambadar
Issue Date: November 2006
pp. 162-170
Past research on automatic facial expression analysis has focused mostly on the recognition of prototypic expressions of discrete emotions rather than on the analysis of dynamic changes over time, although the importance of temporal dynamics of facial expr...
     
Beyond group differences: specificity of nonverbal behavior and interpersonal communication to depression severity
Found in: Proceedings of the 3rd ACM international workshop on Audio/visual emotion challenge (AVEC '13)
By Jeffrey F. Cohn
Issue Date: October 2013
pp. 1-2
Depression is one of the most prevalent mental health disorders and a leading cause of disability worldwide. AVEC 2013 heralds the first systematic effort to detect presence of depression from nonverbal behavior. This keynote addresses three related issues...
     
Automatic detection of pain intensity
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Jeffrey F. Cohn, Zakia Hammal
Issue Date: October 2012
pp. 47-52
Previous efforts suggest that occurrence of pain can be detected from the face. Can intensity of pain be detected as well? The Prkachin and Solomon Pain Intensity (PSPI) metric was used to classify four levels of pain intensity (none, trace, weak, and stro...
     
Social signal processing in depression
Found in: Proceedings of the 2nd international workshop on Social signal processing (SSPW '10)
By Jeffrey F. Cohn
Issue Date: October 2010
pp. 1-2
As social signal processing develops as a field of enquiry and application, there is emerging focus on individual differences in social signaling. My colleagues and I have been particularly interested in social signal processing in depression. Depression h...
     
Foundations of human computing: facial expression and emotion
Found in: Proceedings of the 8th international conference on Multimodal interfaces (ICMI '06)
By Jeffrey F. Cohn
Issue Date: November 2006
pp. 233-238
Many people believe that emotions and subjective feelings are one and the same and that a goal of human-centered computing is emotion recognition. The first belief is outdated; the second mistaken. For human-centered computing to succeed, a different way o...
     
Affective multimodal human-computer interaction
Found in: Proceedings of the 13th annual ACM international conference on Multimedia (MULTIMEDIA '05)
By Jeffrey F. Cohn, Maja Pantic, Nicu Sebe, Thomas Huang
Issue Date: November 2005
pp. 669-676
Social and emotional intelligence are aspects of human intelligence that have been argued to be better predictors than IQ for measuring aspects of success in life, especially in social interactions, learning, and adapting to what is important. When it come...
     
Bimodal expression of emotion by face and voice
Found in: Proceedings of the sixth ACM international conference on Multimedia: Face/gesture recognition and their applications (MULTIMEDIA '98)
By Gary S. Katz, Jeffrey F. Cohn
Issue Date: September 1998
pp. 41-44
This paper addresses the problem of how to automatically generate visual representations of recorded histories of distributed multimedia collaborations. The work reported here focuses mainly on what we consider to be an innovative approach to this problem,...
     