Issue No. 03 - July-September (2011 vol. 2)
ISSN: 1949-3045
pp: 162-174
Sona Patel , University of Geneva, Geneva
Klaus R. Scherer , University of Geneva, Geneva
Johan Sundberg , KTH Royal Institute of Technology, Stockholm
Eva Björkner , KTH Royal Institute of Technology, Stockholm
Emotions have strong effects on the voice production mechanisms and consequently on voice characteristics. The magnitude of these effects, measured using voice source parameters, and the interdependencies among parameters have not been examined. To better understand these relationships, voice characteristics were analyzed in 10 actors' productions of a sustained /a/ vowel in five emotions. Twelve acoustic parameters were studied and grouped according to their physiological backgrounds: three related to subglottal pressure, five related to the transglottal airflow waveform derived from inverse filtering the audio signal, and four related to vocal fold vibration. Each emotion appeared to possess a specific combination of acoustic parameters reflecting a specific mixture of physiological voice control parameters. Features related to subglottal pressure showed strong within-group and between-group correlations, demonstrating the importance of accounting for vocal loudness in voice analyses. Multiple discriminant analysis revealed that a parameter selection based, in a principled fashion, on production processes could yield rather satisfactory discrimination outcomes (87.1 percent based on 12 parameters and 78 percent based on three parameters). These results suggest that systems for automatic emotion detection should use a hypothesis-driven approach, selecting parameters that directly reflect the physiological parameters underlying voice and speech production.
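The classification step described in the abstract can be illustrated with a minimal sketch of linear discriminant analysis over acoustic feature vectors. This is not the paper's method or data: the feature dimensions, cluster centers, and three-class setup below are entirely hypothetical stand-ins for the paper's 12 voice source parameters and five emotions, and the classifier is a basic shared-covariance LDA rather than the authors' full multiple discriminant analysis.

```python
# Hypothetical sketch: classifying "emotions" from acoustic feature
# vectors with linear discriminant analysis (shared-covariance LDA).
# All data and dimensions are synthetic, not from the cited study.
import numpy as np

def lda_fit(X, y):
    """Estimate class means, pooled within-class covariance, and priors."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    n, d = X.shape
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c] - means[c]
        Sw += Xc.T @ Xc  # within-class scatter
    cov = Sw / (n - len(classes))  # pooled (shared) covariance
    priors = {c: np.mean(y == c) for c in classes}
    return classes, means, np.linalg.inv(cov), priors

def lda_predict(model, X):
    """Assign each row of X to the class with the highest linear score."""
    classes, means, cov_inv, priors = model
    scores = []
    for c in classes:
        m = means[c]
        w = cov_inv @ m                                  # linear weights
        b = -0.5 * m @ cov_inv @ m + np.log(priors[c])   # bias term
        scores.append(X @ w + b)
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

rng = np.random.default_rng(0)
# Three hypothetical emotion classes as clusters in a 3-feature space
# (illustrative stand-ins for, e.g., subglottal-pressure-related measures).
centers = np.array([[0.0, 0.0, 0.0], [3.0, 3.0, 0.0], [0.0, 3.0, 3.0]])
X = np.vstack([c + rng.normal(scale=0.5, size=(30, 3)) for c in centers])
y = np.repeat([0, 1, 2], 30)

model = lda_fit(X, y)
accuracy = np.mean(lda_predict(model, X) == y)
```

With well-separated synthetic clusters the training accuracy is near perfect; the point of the sketch is only the shape of the pipeline (feature vectors in, discriminant scores, class assignment), which mirrors the hypothesis-driven parameter selection the abstract advocates.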
Paralanguage analysis, affect sensing and analysis, affective computing, voice source, vocal physiology.
Sona Patel, Klaus R. Scherer, Johan Sundberg, Eva Björkner, "Interdependencies among Voice Source Parameters in Emotional Speech", IEEE Transactions on Affective Computing, vol. 2, no. 3, pp. 162-174, July-September 2011, doi:10.1109/T-AFFC.2011.14