ABSTRACT
An appealing scheme for characterizing expressive behaviors is the use of emotional dimensions such as activation (calm versus active) and valence (negative versus positive). These descriptors offer many advantages for describing the wide spectrum of emotions. Due to the continuous nature of fast-changing expressive vocal and gestural behaviors, it is desirable to track these emotional traces continuously, capturing subtle and localized events (e.g., with FEELTRACE). However, time-continuous annotations introduce challenges that affect the reliability of the labels. In particular, an important issue is the evaluators' reaction lag caused by observing, appraising, and responding to the expressive behaviors. This paper proposes to compensate for this reaction lag by finding the time shift that maximizes the mutual information between the expressive behaviors and the time-continuous annotations. The approach is implemented under different assumptions about the evaluators' reaction lag. An empirical analysis demonstrates that this delay varies from 1 to 6 seconds, depending on the annotator, expressive dimension, and actual behaviors, and the experiments show accuracy improvements even with fixed delays (1-3 seconds). The benefits of compensating for the delay are demonstrated with emotion classification experiments: on average, classifiers trained with facial and speech features show more than 7 percent relative improvement over baseline classifiers trained and tested without shifting the time-continuous annotations.
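As an illustration of the core idea, the minimal Python sketch below searches over candidate delays and keeps the shift that maximizes a histogram-based estimate of the mutual information between a feature stream and an annotation trace. The function names (mutual_information, estimate_reaction_lag), the 16-bin plug-in estimator, and the shared sampling rate are illustrative assumptions, not the implementation evaluated in the paper.

import numpy as np

def mutual_information(x, y, bins=16):
    # Plug-in (histogram) estimate of I(X; Y) in nats.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def estimate_reaction_lag(features, annotation, rate_hz, max_lag_s=6.0):
    # Search delays from 0 to max_lag_s seconds. For each candidate lag,
    # shift the annotation back in time so that the label at time t is
    # aligned with the behavior at time t - lag, then keep the shift that
    # maximizes the mutual information. Assumes both traces are sampled at
    # rate_hz and are longer than the maximum lag.
    n = min(len(features), len(annotation))
    best_lag, best_mi = 0.0, -np.inf
    for lag in range(int(max_lag_s * rate_hz) + 1):
        mi = mutual_information(features[: n - lag], annotation[lag:n])
        if mi > best_mi:
            best_lag, best_mi = lag / rate_hz, mi
    return best_lag

A toy check on hypothetical data: an annotation built as a two-second-delayed copy of its feature stream should yield an estimate close to 2.0.

rng = np.random.default_rng(0)
rate = 25                                   # 25 Hz frame rate
x = rng.standard_normal(60 * rate).cumsum() # slowly varying feature
y = np.r_[np.zeros(2 * rate), x[:-2 * rate]]  # 2 s delayed copy
print(estimate_reaction_lag(x, y, rate))    # ~2.0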
INDEX TERMS
time-continuous emotion annotation, emotion recognition, emotion classification, emotional dimension, emotional descriptors, expressive behavior, evaluator reaction lag modeling, delay compensation, time shift, maximum mutual information, feature extraction, acoustics, psychology
CITATION
"Correcting Time-Continuous Emotional Labels by Modeling the Reaction Lag of Evaluators", IEEE Transactions on Affective Computing, vol. 6, no. , pp. 97-108, April-June 2015, doi:10.1109/TAFFC.2014.2334294