Displaying 1-9 out of 9 total
Iterative Feature Normalization Scheme for Automatic Emotion Detection from Speech
Found in: IEEE Transactions on Affective Computing
By Carlos Busso, Soroosh Mariooryad, Angeliki Metallinou, Shrikanth Narayanan
Issue Date: October 2013
pp. 386-397
The externalization of emotion is intrinsically speaker-dependent. A robust emotion recognition system should be able to compensate for these differences across speakers. A natural approach is to normalize the features before training the classifiers. Howe...
 
Analysis and Compensation of the Reaction Lag of Evaluators in Continuous Emotional Annotations
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Soroosh Mariooryad, Carlos Busso
Issue Date: September 2013
pp. 85-90
Defining useful emotional descriptors to characterize expressive behaviors is an important research area in affective computing. Recent studies have shown the benefits of using continuous emotional evaluations to annotate spontaneous corpora. Instead of as...
 
Exploring Cross-Modality Affective Reactions for Audiovisual Emotion Recognition
Found in: IEEE Transactions on Affective Computing
By Soroosh Mariooryad, Carlos Busso
Issue Date: April 2013
pp. 183-196
Psycholinguistic studies on human communication have shown that during human interaction individuals tend to adapt their behaviors mimicking the spoken style, gestures, and expressions of their conversational partners. This synchronization pattern is refer...
 
Analysis of driver behaviors during common tasks using frontal video camera and CAN-Bus information
Found in: Multimedia and Expo, IEEE International Conference on
By Jinesh J Jain, Carlos Busso
Issue Date: July 2011
pp. 1-6
Even a small distraction in drivers can lead to life-threatening accidents that affect the lives of many. Monitoring distraction is a key aspect of any feedback system intended to keep the driver's attention. Toward this goal, this paper studies the behaviors...
 
Correcting Time-Continuous Emotional Labels by Modeling the Reaction Lag of Evaluators
Found in: IEEE Transactions on Affective Computing
By Soroosh Mariooryad, Carlos Busso
Issue Date: July 2014
pp. 1
An appealing scheme to characterize expressive behaviors is the use of emotional dimensions such as activation (calm versus active) and valence (negative versus positive). These descriptors offer many advantages to describe the wide spectrum of emotions. D...
 
Feature and model level compensation of lexical content for facial emotion recognition
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Soroosh Mariooryad, Carlos Busso
Issue Date: April 2013
pp. 1-6
Along with emotions, modulation of the lexical content is an integral aspect of spontaneously produced facial expressions. Hence, the verbal content introduces an undesired variability for solving the facial emotion recognition problem, especially in conti...
   
Evaluating the robustness of an appearance-based gaze estimation method for multimodal interfaces
Found in: Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13)
By Nanxiang Li, Carlos Busso
Issue Date: December 2013
pp. 91-98
Given the crucial role of eye movements on visual attention, tracking gaze behaviors is an important research problem in various applications including biometric identification, attention modeling and human-computer interaction. Most of the existing gaze t...
     
Analysis of emotion recognition using facial expressions, speech and multimodal information
Found in: Proceedings of the 6th international conference on Multimodal interfaces (ICMI '04)
By Abe Kazemzadeh, Carlos Busso, Chul Min Lee, Murtaza Bulut, Serdar Yildirim, Shrikanth Narayanan, Sungbok Lee, Ulrich Neumann, Zhigang Deng
Issue Date: October 2004
pp. 205-211
The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although several approaches have been proposed to recognize human emotions based on fa...
     
Audio-based head motion synthesis for Avatar-based telepresence systems
Found in: Proceedings of the 2004 ACM SIGMM workshop on Effective telepresence (ETP '04)
By Carlos Busso, Shri Narayanan, Ulrich Neumann, Zhigang Deng
Issue Date: October 2004
pp. 24-30
In this paper, a data-driven audio-based head motion synthesis technique is presented for avatar-based telepresence systems. First, head motion of a human subject speaking a custom corpus is captured, and the accompanying audio features are extracted. Base...