Displaying results 1-7 of 7
A multitask approach to continuous five-dimensional affect sensing in natural speech
Found in: ACM Transactions on Interactive Intelligent Systems (TiiS)
By Florian Eyben, Martin Wöllmer, Björn Schuller
Issue Date: March 2012
pp. 1-29
Automatic affect recognition is important if future technical systems are to interact with us socially in an intelligent way by understanding our current affective state. In recent years there has been a shift in the field of affect recognition...
     
Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies
Found in: IEEE Transactions on Affective Computing
By Björn Schuller, Bogdan Vlasenko, Florian Eyben, Martin Wöllmer, André Stuhlsatz, Andreas Wendemuth, Gerhard Rigoll
Issue Date: July 2010
pp. 119-131
As the recognition of emotion from speech has matured to a degree where it becomes applicable in real-life settings, it is time for a realistic view on obtainable performances. Most studies tend toward overestimation in this respect: acted data is often used r...
 
YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context
Found in: IEEE Intelligent Systems
By Martin Wöllmer, Felix Weninger, Tobias Knaup, Björn Schuller, Congkai Sun, Kenji Sagae, Louis-Philippe Morency
Issue Date: May 2013
pp. 46-53
This work focuses on automatically analyzing a speaker's sentiment in online videos containing movie reviews. In addition to textual information, this approach considers adding audio features as typically used in speech-based emotion recognition as well as...
 
Robust discriminative keyword spotting for emotionally colored spontaneous speech using bidirectional LSTM networks
Found in: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
By Martin Wöllmer, Florian Eyben, Joseph Keshet, Alex Graves, Björn Schuller, Gerhard Rigoll
Issue Date: April 2009
pp. 3949-3952
In this paper we propose a new technique for robust keyword spotting that uses bidirectional Long Short-Term Memory (BLSTM) recurrent neural nets to incorporate contextual information in speech decoding. Our approach overcomes the drawbacks of generative H...
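For orientation, a minimal sketch (not the authors' implementation) of a bidirectional LSTM sequence classifier of the kind the abstract refers to, written in PyTorch; the feature dimensionality, hidden size, and number of keyword classes are illustrative assumptions:

    import torch
    import torch.nn as nn

    class BLSTMKeywordSpotter(nn.Module):
        def __init__(self, num_features=39, hidden_size=128, num_keywords=10):
            super().__init__()
            # bidirectional=True lets every frame see both past and future context
            self.blstm = nn.LSTM(num_features, hidden_size,
                                 batch_first=True, bidirectional=True)
            # forward and backward states are concatenated, hence 2 * hidden_size;
            # one extra output class stands for "no keyword"
            self.classifier = nn.Linear(2 * hidden_size, num_keywords + 1)

        def forward(self, x):
            # x: (batch, frames, num_features) acoustic feature sequence
            out, _ = self.blstm(x)
            return self.classifier(out)  # per-frame keyword scores

    # Example: 3 utterances of 200 frames with 39-dimensional features
    model = BLSTMKeywordSpotter()
    scores = model(torch.randn(3, 200, 39))
    print(scores.shape)  # torch.Size([3, 200, 11])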
 
Tandem decoding of children's speech for keyword detection in a child-robot interaction scenario
Found in: ACM Transactions on Speech and Language Processing (TSLP)
By Anton Batliner, Björn Schuller, Dino Seppi, Martin Wöllmer, Stefan Steidl
Issue Date: August 2011
pp. 1-22
In this article, we focus on keyword detection in children's speech as it is needed in voice command systems. We use the FAU Aibo Emotion Corpus which contains emotionally colored spontaneous children's speech recorded in a child-robot interaction scenario...
     
3D gesture recognition applying long short-term memory and contextual knowledge in a CAVE
Found in: Proceedings of the 1st ACM international workshop on Multimodal pervasive video analysis (MPVA '10)
By Björn Schuller, Dejan Arsić, Florian Eyben, Gerhard Rigoll, Luis Roalter, Martin Wöllmer, Matthias Kranz, Moritz Kaiser
Issue Date: October 2010
pp. 33-36
Virtual reality applications are emerging in various areas of research and entertainment. Although visual and acoustic capabilities are already quite impressive, a wide range of users still criticize the user interface. Frequently complex and very sen...
     
openSMILE: the Munich versatile and fast open-source audio feature extractor
Found in: Proceedings of the international conference on Multimedia (MM '10)
By Björn Schuller, Florian Eyben, Martin Wöllmer
Issue Date: October 2010
pp. 1459-1462
We introduce the openSMILE feature extraction toolkit, which unites feature extraction algorithms from the speech processing and the Music Information Retrieval communities. Audio low-level descriptors such as CHROMA and CENS features, loudness, Mel-freque...
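As a usage illustration, a minimal sketch of calling the openSMILE command-line extractor (SMILExtract) from Python; the configuration file and the audio/output paths are placeholders rather than values from the paper:

    import subprocess

    # Run SMILExtract with a feature-set configuration (-C), an input audio
    # file (-I) and an output file (-O); the paths used here are assumptions.
    subprocess.run([
        "SMILExtract",
        "-C", "config/IS09_emotion.conf",
        "-I", "input.wav",
        "-O", "features.csv",
    ], check=True)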
     