Issue No. 02 - April-June (2014 vol. 5)
ISSN: 1949-3045
Daniel Bone, Department of Electrical Engineering, University of Southern California, Los Angeles, CA
Chi-Chun Lee, Department of Electrical Engineering, National Tsing Hua University, Taiwan
Shrikanth Narayanan, Department of Electrical Engineering, University of Southern California, Los Angeles, CA
ABSTRACT
Studies in classifying affect from vocal cues have produced exceptional within-corpus results, especially for arousal (activation or stress); yet cross-corpus affect recognition has only recently garnered attention. An essential requirement of many behavioral studies is affect scoring that generalizes across different social contexts and data conditions. We present a robust, unsupervised (rule-based) method for producing a scale-continuous, bounded arousal rating that operates on the vocal signal. The method incorporates just three knowledge-inspired features chosen on the basis of empirical and theoretical evidence. It constructs a speaker's baseline model for each feature separately, computes single-feature arousal scores, and then advantageously fuses those scores into a final rating without knowledge of the true affect. The baseline data is preferably labeled as neutral, but some initial evidence suggests that no labeled data is required in certain cases. The proposed method is compared to a state-of-the-art supervised technique that employs a high-dimensional feature set. The proposed framework achieves highly competitive performance with additional benefits: the measure is interpretable, scale-continuous rather than discrete, and can operate without any affective labeling. An accompanying Matlab tool is made available with the paper.
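Although the accompanying tool is released in Matlab, the Python sketch below illustrates the general idea summarized in the abstract: a per-feature speaker baseline model, bounded single-feature arousal scores, and label-free fusion into a final rating. The specific feature names (median pitch, intensity, high-frequency energy ratio), the percentile-style baseline scoring, and the agreement-based fusion weights are illustrative assumptions for this sketch, not the paper's exact rules.

```python
# Minimal sketch (not the authors' released Matlab tool): an unsupervised,
# rule-based arousal rater in the spirit of the abstract. Feature names and
# the baseline/fusion rules below are illustrative assumptions.
import numpy as np

def single_feature_scores(feature_vals, baseline_vals):
    """Score each utterance's feature against a speaker baseline model.

    The 'baseline model' here is simply the empirical distribution of the
    feature over (preferably neutral) baseline utterances; the score is the
    fraction of baseline values the utterance exceeds, giving a bounded
    rating in [0, 1].
    """
    baseline = np.sort(np.asarray(baseline_vals))
    ranks = np.searchsorted(baseline, np.asarray(feature_vals), side="right")
    return ranks / len(baseline)

def fuse_scores(score_matrix):
    """Fuse per-feature arousal scores without affect labels.

    Assumption: weight each feature by how well it agrees with the mean of
    the remaining features (correlation-based weighting), then take the
    weighted average. The published framework uses its own fusion rule.
    """
    S = np.asarray(score_matrix)            # shape: (n_utterances, n_features)
    weights = np.empty(S.shape[1])
    for j in range(S.shape[1]):
        others = np.delete(S, j, axis=1).mean(axis=1)
        r = np.corrcoef(S[:, j], others)[0, 1]
        weights[j] = max(r, 0.0)            # ignore features that disagree
    if weights.sum() == 0:
        weights[:] = 1.0                    # fall back to an unweighted mean
    weights /= weights.sum()
    return S @ weights                      # bounded, scale-continuous ratings

# Usage: per-utterance feature values (rows) vs. a neutral baseline per feature.
rng = np.random.default_rng(0)
baseline = {"pitch": rng.normal(120, 10, 50),
            "intensity": rng.normal(60, 5, 50),
            "hf_ratio": rng.normal(0.3, 0.05, 50)}
utts = {"pitch": [118, 140, 160],
        "intensity": [58, 66, 72],
        "hf_ratio": [0.28, 0.36, 0.45]}
scores = np.column_stack([single_feature_scores(utts[k], baseline[k]) for k in utts])
print(fuse_scores(scores))                  # higher values -> higher inferred arousal
```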
INDEX TERMS
Robustness, Speech, Feature extraction, Acoustics, Databases, Context, Accuracy
CITATION

D. Bone, C.-C. Lee and S. Narayanan, "Robust Unsupervised Arousal Rating: A Rule-Based Framework with Knowledge-Inspired Vocal Features," in IEEE Transactions on Affective Computing, vol. 5, no. 2, 2014.
doi:10.1109/TAFFC.2014.2326393