Issue No. 2, April-June 2015 (vol. 6)
ISSN: 1949-3045
pp: 97-108
Soroosh Mariooryad , Erik Jonsson School of Engineering & Computer Science, The University of Texas at Dallas, TX
Carlos Busso , Erik Jonsson School of Engineering & Computer Science, The University of Texas at Dallas, TX
An appealing scheme to characterize expressive behaviors is the use of emotional dimensions such as activation (calm versus active) and valence (negative versus positive). These descriptors offer many advantages for describing the wide spectrum of emotions. Given the continuous nature of fast-changing expressive vocal and gestural behaviors, it is desirable to continuously track these emotional traces, capturing subtle and localized events (e.g., with FEELTRACE). However, time-continuous annotations introduce challenges that affect the reliability of the labels. In particular, an important issue is the evaluators' reaction lag caused by observing, appraising, and responding to the expressive behaviors. An empirical analysis demonstrates that this delay varies from 1 to 6 seconds, depending on the annotator, expressive dimension, and actual behaviors. This paper proposes to compensate for this reaction lag by finding the time-shift that maximizes the mutual information between the expressive behaviors and the time-continuous annotations. The approach is implemented by making different assumptions about the evaluators' reaction lag. The benefits of compensating for the delay are demonstrated with emotion classification experiments. On average, the classifiers trained with facial and speech features show more than 7 percent relative improvement over baseline classifiers trained and tested without shifting the time-continuous annotations. Our experiments show accuracy improvements even with fixed delays (1-3 seconds).
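The core idea of the paper, finding the time-shift that maximizes the mutual information between behavioral features and the annotation trace, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes 1-D feature and annotation signals sampled at the same rate, and uses a simple histogram-based mutual information estimate; the function names and bin count are hypothetical choices.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based mutual information estimate between two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()              # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)    # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def estimate_reaction_lag(features, annotation, max_lag):
    """Return the shift (in frames) of the annotation trace, relative to the
    feature trace, that maximizes their mutual information."""
    best_lag, best_mi = 0, -np.inf
    for lag in range(max_lag + 1):
        if lag == 0:
            f, a = features, annotation
        else:
            # Align annotation[t + lag] with features[t]
            f, a = features[:-lag], annotation[lag:]
        mi = mutual_information(f, a)
        if mi > best_mi:
            best_lag, best_mi = lag, mi
    return best_lag
```

On a synthetic trace where the annotation is a delayed, noisy copy of the feature signal, the recovered lag matches the injected delay; the estimated lag would then be subtracted from the annotation timestamps before training a classifier.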
Index Terms: Delays, Gold, Mutual information, Feature extraction, Databases, Emotion recognition, Acoustics
Soroosh Mariooryad, Carlos Busso, "Correcting Time-Continuous Emotional Labels by Modeling the Reaction Lag of Evaluators", IEEE Transactions on Affective Computing, vol. 6, no. 2, pp. 97-108, April-June 2015, doi:10.1109/TAFFC.2014.2334294