Displaying 1-24 out of 24 total
A multitask approach to continuous five-dimensional affect sensing in natural speech
Found in: ACM Transactions on Interactive Intelligent Systems (TiiS)
By Florian Eyben, Martin Wöllmer, Björn Schuller
Issue Date: March 2012
pp. 1-29
Automatic affect recognition is important for the ability of future technical systems to interact with us socially in an intelligent way by understanding our current affective state. In recent years there has been a shift in the field of affect recognition...
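
The multitask idea named in the entry above can be illustrated with a minimal sketch: one regressor with a shared trunk and a single output layer covering all five continuous affect dimensions, trained jointly so the dimensions share a representation. The feature dimensionality, network size, and random data below are illustrative assumptions, not the authors' setup.

import torch
import torch.nn as nn

# Hypothetical data: N segments x 384 acoustic functionals, 5 continuous affect dimensions each.
N, FEATS, DIMS = 256, 384, 5
X = torch.randn(N, FEATS)
Y = torch.rand(N, DIMS) * 2 - 1          # continuous labels in [-1, 1]

# Shared trunk plus one linear head over all five dimensions = joint multitask regression.
model = nn.Sequential(
    nn.Linear(FEATS, 128), nn.ReLU(),
    nn.Linear(128, DIMS),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(X), Y)          # one loss over all dimensions at once
    loss.backward()
    opt.step()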
     
Sparse Autoencoder-Based Feature Transfer Learning for Speech Emotion Recognition
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Jun Deng, Zixing Zhang, Erik Marchi, Björn Schuller
Issue Date: September 2013
pp. 511-516
In speech emotion recognition, training and test data used for system development usually tend to fit each other perfectly, but further 'similar' data may be available. Transfer learning helps to exploit such similar data for training despite the inherent ...
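
A rough sketch of the transfer-learning idea named above, under the assumption that a single-layer sparse autoencoder is trained to reconstruct target-domain feature vectors with an L1 activation penalty and its encoder is then reused on the "similar" source-domain data; layer sizes, penalty weight, and the synthetic data are placeholders.

import torch
import torch.nn as nn

FEATS, HIDDEN = 384, 64
target_X = torch.randn(500, FEATS)       # features from the target corpus (synthetic here)
source_X = torch.randn(800, FEATS)       # "similar" source corpus to be transformed

encoder = nn.Sequential(nn.Linear(FEATS, HIDDEN), nn.Sigmoid())
decoder = nn.Linear(HIDDEN, FEATS)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(100):
    opt.zero_grad()
    code = encoder(target_X)
    recon = decoder(code)
    # Reconstruction error plus an L1 penalty on the hidden activations enforces sparsity.
    loss = nn.functional.mse_loss(recon, target_X) + 1e-3 * code.abs().mean()
    loss.backward()
    opt.step()

# Transfer step: reuse the target-trained encoder as a feature map for the source data.
source_codes = encoder(source_X).detach()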
 
Statistical Approaches to Concept-Level Sentiment Analysis
Found in: IEEE Intelligent Systems
By Erik Cambria, Björn Schuller, Bing Liu, Haixun Wang, Catherine Havasi
Issue Date: May 2013
pp. 6-9
The guest editors introduce novel statistical approaches to concept-level sentiment analysis that go beyond a mere syntactic-driven analysis of text and provide semantic-based methods. Such approaches allow a more efficient passage from (unstructured) text...
   
YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context
Found in: IEEE Intelligent Systems
By Martin Wöllmer, Felix Weninger, Tobias Knaup, Björn Schuller, Congkai Sun, Kenji Sagae, Louis-Philippe Morency
Issue Date: May 2013
pp. 46-53
This work focuses on automatically analyzing a speaker's sentiment in online videos containing movie reviews. In addition to textual information, this approach considers adding audio features as typically used in speech-based emotion recognition as well as...
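
One simple way to combine textual and acoustic cues, as the entry above suggests, is late fusion: train one classifier per modality and average their class probabilities. The sketch below uses synthetic features and logistic regression purely for illustration; the paper's actual feature sets and fusion scheme may differ.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic per-review features for two modalities and binary sentiment labels.
rng = np.random.default_rng(0)
text_X, audio_X = rng.normal(size=(200, 300)), rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)

text_clf = LogisticRegression(max_iter=1000).fit(text_X, y)
audio_clf = LogisticRegression(max_iter=1000).fit(audio_X, y)

# Late fusion: average the per-modality class probabilities, then decide.
fused = (text_clf.predict_proba(text_X) + audio_clf.predict_proba(audio_X)) / 2
pred = fused.argmax(axis=1)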
 
Knowledge-Based Approaches to Concept-Level Sentiment Analysis
Found in: IEEE Intelligent Systems
By Erik Cambria, Björn Schuller, Bing Liu, Haixun Wang, Catherine Havasi
Issue Date: March 2013
pp. 12-14
The guest editors introduce novel approaches to opinion mining and sentiment analysis that go beyond a mere word-level analysis of text and provide concept-level methods. Such approaches allow a more efficient passage from (unstructured) textual informatio...
 
New Avenues in Opinion Mining and Sentiment Analysis
Found in: IEEE Intelligent Systems
By Erik Cambria, Björn Schuller, Yunqing Xia, Catherine Havasi
Issue Date: March 2013
pp. 15-21
The distillation of knowledge from the Web—also known as opinion mining and sentiment analysis—is a task that has recently raised growing interest for purposes such as customer service, predicting financial markets, monitoring p...
 
Guest Editorial: Special Section on Naturalistic Affect Resources for System Building and Evaluation
Found in: IEEE Transactions on Affective Computing
By Björn Schuller, Ellen Douglas-Cowie, Anton Batliner
Issue Date: January 2012
pp. 3-4
The papers in this special section focus on the deployment of naturalistic affect resources for systems design and analysis.
 
Recognizing Affect from Linguistic Information in 3D Continuous Space
Found in: IEEE Transactions on Affective Computing
By Björn Schuller
Issue Date: October 2011
pp. 192-205
Most research efforts dealing with recognition of emotion-related states from the human speech signal concentrate on acoustic analysis. However, the last decade's research results show that the task cannot be solved to complete satisfaction, especially whe...
 
Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies
Found in: IEEE Transactions on Affective Computing
By Björn Schuller, Bogdan Vlasenko, Florian Eyben, Martin Wöllmer, André Stuhlsatz, Andreas Wendemuth, Gerhard Rigoll
Issue Date: July 2010
pp. 119-131
As the recognition of emotion from speech has matured to a degree where it becomes applicable in real-life settings, it is time for a realistic view on obtainable performances. Most studies tend to overestimation in this respect: Acted data is often used r...
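
A common strategy in cross-corpus work of this kind is to normalize each corpus on its own feature statistics before training on one corpus and testing on another. The sketch below shows that protocol with synthetic data and a linear SVM; corpus sizes, features, and labels are placeholders, not the paper's databases.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
# Two corpora with different feature scales and binary labels (synthetic).
train_X, train_y = rng.normal(0, 1, (300, 100)), rng.integers(0, 2, 300)
test_X, test_y = rng.normal(2, 3, (200, 100)), rng.integers(0, 2, 200)

# Normalize each corpus on its own statistics (one cross-corpus strategy).
train_Xn = StandardScaler().fit_transform(train_X)
test_Xn = StandardScaler().fit_transform(test_X)

clf = SVC(kernel="linear").fit(train_Xn, train_y)
uar = recall_score(test_y, clf.predict(test_Xn), average="macro")  # unweighted average recall
print(f"cross-corpus UAR: {uar:.2f}")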
 
'The Godfather' vs. 'Chaos': Comparing Linguistic Analysis Based on On-line Knowledge Sources and Bags-of-N-Grams for Movie Review Valence Estimation
Found in: Document Analysis and Recognition, International Conference on
By Björn Schuller, Joachim Schenk, Gerhard Rigoll, Tobias Knaup
Issue Date: July 2009
pp. 858-862
In the fields of sentiment and emotion recognition, bag of words modeling has lately become popular for the estimation of valence in text. A typical application is the evaluation of reviews of e. g. movies, music, or games. In this respect we suggest the u...
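
The bag-of-words valence estimation mentioned in the entry above can be sketched in a few lines of scikit-learn: vectorize the review text and fit a linear regressor to continuous valence scores. The toy reviews and scores below are invented for illustration and are not the paper's data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy movie-review snippets with hand-assigned valence scores in [-1, 1].
reviews = ["a brilliant, moving film", "dull plot and wooden acting",
           "great soundtrack, weak story", "an absolute masterpiece"]
valence = [0.9, -0.8, 0.1, 1.0]

# Bag-of-words (here TF-IDF over uni- and bigrams) feeding a linear regressor.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(reviews, valence)
print(model.predict(["wooden acting but a moving story"]))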
 
GMs in On-Line Handwritten Whiteboard Note Recognition: The Influence of Implementation and Modeling
Found in: Document Analysis and Recognition, International Conference on
By Joachim Schenk, Benedikt Hörnler, Björn Schuller, Artur Braun, Gerhard Rigoll
Issue Date: July 2009
pp. 877-880
We present a comparison of two state-of-the-art toolboxes for implementing Graphical Models (GMs), namely the HTK and the GMTK, and their use for discrete on-line handwritten whiteboard note recognition. We then motivate a GM that is capable of modeling th...
 
Robust discriminative keyword spotting for emotionally colored spontaneous speech using bidirectional LSTM networks
Found in: Acoustics, Speech, and Signal Processing, IEEE International Conference on
By Martin Wöllmer, Florian Eyben, Joseph Keshet, Alex Graves, Björn Schuller, Gerhard Rigoll
Issue Date: April 2009
pp. 3949-3952
In this paper we propose a new technique for robust keyword spotting that uses bidirectional Long Short-Term Memory (BLSTM) recurrent neural nets to incorporate contextual information in speech decoding. Our approach overcomes the drawbacks of generative H...
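
A minimal PyTorch sketch of the bidirectional LSTM building block named above: framewise acoustic features pass through a BLSTM and a linear layer that scores keyword versus filler classes per frame. Layer sizes, class counts, and the random input are assumptions; the paper's discriminative training objective is not reproduced here.

import torch
import torch.nn as nn

FRAMES, FEATS, CLASSES = 200, 39, 11     # e.g. 10 keywords + 1 filler class (illustrative)
x = torch.randn(1, FRAMES, FEATS)        # one utterance of MFCC-like frames

class BLSTMTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.blstm = nn.LSTM(FEATS, 64, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * 64, CLASSES)    # forward + backward hidden states

    def forward(self, x):
        h, _ = self.blstm(x)                     # context from both directions per frame
        return self.out(h)                       # framewise class scores

scores = BLSTMTagger()(x)                        # shape: (1, FRAMES, CLASSES)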
 
Emotion recognition from speech: Putting ASR in the loop
Found in: Acoustics, Speech, and Signal Processing, IEEE International Conference on
By Björn Schuller, Anton Batliner, Stefan Steidl, Dino Seppi
Issue Date: April 2009
pp. 4585-4588
This paper investigates the automatic recognition of emotion from spoken words by vector space modeling vs. string kernels which have not been investigated in this respect, yet. Apart from the spoken content directly, we integrate Part-of-Speech and higher...
 
Segmentation and Recognition of Meeting Events using a Two-Layered HMM and a Combined MLP-HMM Approach
Found in: Multimedia and Expo, IEEE International Conference on
By Stephan Reiter, Björn Schuller, Gerhard Rigoll
Issue Date: July 2006
pp. 953-956
Automatic segmentation and classification of recorded meetings provides a basis that enables effective browsing and querying in a meeting archive. Yet, robustness of today's approaches is often not reliable enough. We therefore strive to improve on this ta...
 
Efficient Recognition of Authentic Dynamic Facial Expressions on the Feedtum Database
Found in: Multimedia and Expo, IEEE International Conference on
By Frank Wallhoff, Björn Schuller, Michael Hawellek, Gerhard Rigoll
Issue Date: July 2006
pp. 493-496
In order to allow for fast recognition of a user's affective state we discuss innovative holistic and self organizing approaches for efficient facial expression analysis. The feature set is thereby formed by global descriptors and MPEG based DCT coefficien...
 
A Two-Layer Graphical Model for Combined Video Shot and Scene Boundary Detection
Found in: Multimedia and Expo, IEEE International Conference on
By Marc Al-Hames, Stefan Zettl, Frank Wallhoff, Stephan Reiter, Björn Schuller, Gerhard Rigoll
Issue Date: July 2006
pp. 261-264
In this work we present a novel two-layer hybrid Graphical model for combined shot and scene boundary detection in videos. In the first layer of the model, low-level features are used to detect shot boundaries. The shot layer is connected to a higher layer...
 
Musical Signal Type Discrimination based on Large Open Feature Sets
Found in: Multimedia and Expo, IEEE International Conference on
By Björn Schuller, Frank Wallhoff, Dejan Arsic, Gerhard Rigoll
Issue Date: July 2006
pp. 1089-1092
Automatic discrimination of musical signal types as speech, singing, music, genres or drumbeats within audio streams is of great importance e. g. for radio broadcast stream segmentation. Yet, feature sets are largely discussed. We therefore suggest a large...
 
Evolutionary Feature Generation in Speech Emotion Recognition
Found in: Multimedia and Expo, IEEE International Conference on
By Björn Schuller, Stephan Reiter, Gerhard Rigoll
Issue Date: July 2006
pp. 5-8
Feature sets are broadly discussed within speech emotion recognition by acoustic analysis. While popular filter and wrapper based search help to retrieve relevant ones, we feel that automatic generation of such allows for more flexibility throughout search...
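
The entry above argues for generating features automatically rather than merely selecting them. As a bare-bones illustration (not the authors' algorithm), the loop below randomly combines base features with simple operators and keeps whichever combination correlates best with the labels; the data, operators, and fitness measure are placeholders.

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))                        # base acoustic features (synthetic)
y = X[:, 3] * X[:, 7] + 0.1 * rng.normal(size=300)    # hidden target relation

def fitness(feature):
    return abs(np.corrcoef(feature, y)[0, 1])         # label correlation as fitness

ops = [np.add, np.subtract, np.multiply]
best, best_fit = None, 0.0
for generation in range(500):                         # toy generate-and-test search
    i, j = rng.integers(0, X.shape[1], size=2)
    candidate = ops[rng.integers(len(ops))](X[:, i], X[:, j])
    f = fitness(candidate)
    if f > best_fit:
        best, best_fit = candidate, f
print(f"best generated feature fitness: {best_fit:.2f}")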
 
Distributing Recognition in Computational Paralinguistics
Found in: IEEE Transactions on Affective Computing
By Zixing Zhang, Eduardo Coutinho, Jun Deng, Björn Schuller
Issue Date: February 2015
pp. 1
In this paper, we propose and evaluate a distributed system for multiple Computational Paralinguistics tasks in a client-server architecture. The client side deals with feature extraction, compression and bit-stream formatting, while the server side perform...
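
The client/server split described above can be sketched with a placeholder pipeline: the client quantizes a feature vector to 8 bits and packs it into a byte stream, and the server unpacks, dequantizes, and classifies. Feature sizes, the quantization scheme, and the classifier are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X, y = rng.normal(size=(400, 88)), rng.integers(0, 2, 400)   # synthetic functionals + labels
clf = LogisticRegression(max_iter=1000).fit(X, y)            # server-side model

# --- client side: (pretend-)extracted features, quantized to 8 bit, packed as bytes ---
feats = rng.normal(size=88)
lo, hi = feats.min(), feats.max()
q = np.round((feats - lo) / (hi - lo) * 255).astype(np.uint8)
payload = np.array([lo, hi], dtype=np.float32).tobytes() + q.tobytes()

# --- server side: parse the stream, dequantize, and classify ---
lo_s, hi_s = np.frombuffer(payload[:8], dtype=np.float32)
q_s = np.frombuffer(payload[8:], dtype=np.uint8).astype(np.float32)
restored = q_s / 255 * (hi_s - lo_s) + lo_s
print("predicted class:", clf.predict(restored.reshape(1, -1))[0])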
 
Tandem decoding of children's speech for keyword detection in a child-robot interaction scenario
Found in: ACM Transactions on Speech and Language Processing (TSLP)
By Anton Batliner, Björn Schuller, Dino Seppi, Martin Wöllmer, Stefan Steidl
Issue Date: August 2011
pp. 1-22
In this article, we focus on keyword detection in children's speech as it is needed in voice command systems. We use the FAU Aibo Emotion Corpus which contains emotionally colored spontaneous children's speech recorded in a child-robot interaction scenario...
     
3D gesture recognition applying long short-term memory and contextual knowledge in a CAVE
Found in: Proceedings of the 1st ACM international workshop on Multimodal pervasive video analysis (MPVA '10)
By Björn Schuller, Dejan Arsic, Florian Eyben, Gerhard Rigoll, Luis Roalter, Martin Wöllmer, Matthias Kranz, Moritz Kaiser
Issue Date: October 2010
pp. 33-36
Virtual reality applications are emerging into various regions of research and entertainment. Although visual and acoustic capabilities are already quite impressive, a wide range of users still criticizes the user interface. Frequently complex and very sen...
     
openSMILE: the Munich versatile and fast open-source audio feature extractor
Found in: Proceedings of the international conference on Multimedia (MM '10)
By Björn Schuller, Florian Eyben, Martin Wöllmer
Issue Date: October 2010
pp. 1459-1462
We introduce the openSMILE feature extraction toolkit, which unites feature extraction algorithms from the speech processing and the Music Information Retrieval communities. Audio low-level descriptors such as CHROMA and CENS features, loudness, Mel-freque...
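
Readers who want to try the toolkit from the entry above can use audEERING's later Python wrapper (pip install opensmile); the call below assumes that package and a local audio.wav, and is not the 2010 command-line tool the paper itself describes.

import opensmile

# Configure the extractor: eGeMAPS functionals, one feature vector per file.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)

features = smile.process_file("audio.wav")   # returns a pandas DataFrame of descriptors
print(features.shape)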
     
Experimental evaluation of user errors at the skill-based level in an automotive environment
Found in: CHI '02 extended abstracts on Human factors in computer systems (CHI '02)
By Björn Schuller, Frank Althoff, Gregor McGlaun, Karla Geiss, Manfred Lang
Issue Date: April 2002
pp. 782-783
Concentrating on the lowest performance level of Reason's error model, in this work we evaluated the potential of user errors in an automotive environment. Thereby the test subjects had to operate various in-car devices while primarily fulfilling a simulat...
     
A new technique for adjusting distraction moments in multitasking non-field usability tests
Found in: CHI '02 extended abstracts on Human factors in computer systems (CHI '02)
By Björn Schuller, Frank Althoff, Gregor McGlaun, Manfred Lang
Issue Date: April 2002
pp. 666-667
Evaluating errors that result from user interactions with in-car applications, it has to be considered that the user is permanently involved with driving the car. Reproducing this driving workload in non-field usability tests, it showed that the driving si...
     