Issue No. 02 - July-December (2010 vol. 1)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/T-AFFC.2010.8
Björn Schuller , Technische Universität München, München
Bogdan Vlasenko , Otto-von-Guericke Universität (OVGU), Magdeburg
Florian Eyben , Technische Universität München, München
Martin Wöllmer , Technische Universität München, München
André Stuhlsatz , University of Applied Sciences Düsseldorf, Düsseldorf
Andreas Wendemuth , Otto-von-Guericke Universität (OVGU), Magdeburg
Gerhard Rigoll , Technische Universität München, München
As the recognition of emotion from speech has matured to a degree where it becomes applicable in real-life settings, it is time for a realistic view on obtainable performance. Most studies tend to overestimate in this respect: Acted data are often used rather than spontaneous data, results are reported on preselected prototypical data, and truly speaker-disjunctive partitioning is still less common than simple cross-validation. Even speaker-disjunctive evaluation gives only limited insight into the generalization ability of today's emotion recognition engines, since the training and test data used for system development usually tend to be similar with respect to recording conditions, noise overlay, language, and types of emotion. A considerably more realistic impression can be gathered by interset evaluation: We therefore report results on six standard databases in a cross-corpora evaluation experiment, which can also shed light on the prospects of adding resources for training and thus overcoming the typical data sparseness in the field. To better cope with the observed high variances, different types of normalization are investigated. In total, 1.8k individual evaluations indicate the clear performance inferiority of intercorpus compared to intracorpus testing.
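The abstract mentions normalization as a strategy for coping with high cross-corpus variances. As a minimal sketch of one common variant, the following z-normalizes acoustic feature vectors independently within each corpus, so that per-corpus differences in recording conditions shift less of the feature distribution across sets; the function name and data layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def znorm_per_corpus(features, corpus_ids):
    """Z-normalize feature vectors independently within each corpus.

    features:   (n_samples, n_features) array of acoustic features
    corpus_ids: length-n_samples sequence labeling each sample's corpus
    """
    features = np.asarray(features, dtype=float)
    corpus_ids = np.asarray(corpus_ids)
    out = np.empty_like(features)
    for cid in np.unique(corpus_ids):
        mask = corpus_ids == cid
        mu = features[mask].mean(axis=0)
        sigma = features[mask].std(axis=0)
        sigma[sigma == 0] = 1.0  # keep constant features finite
        out[mask] = (features[mask] - mu) / sigma
    return out
```

After this step, each corpus contributes features with zero mean and unit variance, which is one plausible way to make training on one corpus and testing on another less sensitive to set-specific offsets.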
Affective computing, speech emotion recognition, cross-corpus evaluation, normalization
B. Schuller et al., "Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 119-131, July-December 2010.