Issue No. 3, May-June 2013 (vol. 28), pp. 46-53
Martin Wöllmer, Technische Universität München
Felix Weninger, Technische Universität München
Tobias Knaup, Technische Universität München
Björn Schuller, Technische Universität München
Congkai Sun, Shanghai Jiao Tong University
Kenji Sagae, University of Southern California
Louis-Philippe Morency, University of Southern California
This work focuses on automatically analyzing a speaker's sentiment in online videos containing movie reviews. In addition to textual information, the approach incorporates audio features of the kind typically used in speech-based emotion recognition, as well as video features that encode valence information conveyed by the speaker. Experimental results indicate that training on written movie reviews is a promising alternative to relying exclusively on (spoken) in-domain data when building a system that analyzes spoken movie review videos, and that language-independent audio-visual analysis can compete with linguistic analysis.
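The multimodal approach summarized above can be illustrated with a minimal feature-level (early) fusion sketch: per-clip audio, video, and linguistic feature vectors are concatenated into a single vector, which one classifier then maps to a sentiment label. This is not the authors' exact pipeline; the dimensionalities, toy data, and the nearest-centroid classifier (standing in for the discriminative classifiers typically used in this line of work) are all illustrative assumptions.

```python
import numpy as np

def fuse(audio, video, text):
    """Feature-level fusion: concatenate modality-specific vectors."""
    return np.concatenate([audio, video, text])

# Toy training data (made up): clips with positive (+1) or negative (-1)
# sentiment, each described by 4 audio, 3 video, and 5 linguistic features.
rng = np.random.default_rng(0)
pos = [fuse(rng.normal(1, 0.1, 4), rng.normal(1, 0.1, 3), rng.normal(1, 0.1, 5))
       for _ in range(2)]
neg = [fuse(rng.normal(-1, 0.1, 4), rng.normal(-1, 0.1, 3), rng.normal(-1, 0.1, 5))
       for _ in range(2)]

# A nearest-centroid classifier over the fused space, purely for illustration.
centroid_pos = np.mean(pos, axis=0)
centroid_neg = np.mean(neg, axis=0)

def predict(fused):
    """Return +1 (positive) or -1 (negative) by nearest class centroid."""
    d_pos = np.linalg.norm(fused - centroid_pos)
    d_neg = np.linalg.norm(fused - centroid_neg)
    return 1 if d_pos < d_neg else -1

# A clip whose features lean positive in all three modalities.
test_clip = fuse(rng.normal(1, 0.1, 4), rng.normal(1, 0.1, 3), rng.normal(1, 0.1, 5))
print(predict(test_clip))
```

Because fusion happens before classification, the classifier can exploit cross-modal correlations, at the cost of a higher-dimensional input; this is one reason feature selection (see reference 7 below) is often applied to the fused vector.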
Index terms: Videos, Motion pictures, Pragmatics, Context awareness, Feature extraction, YouTube, Visualization, linguistic analysis, intelligent systems, sentiment analysis, affective computing, audio-visual pattern recognition
Martin Wöllmer, Felix Weninger, Tobias Knaup, Björn Schuller, Congkai Sun, Kenji Sagae, Louis-Philippe Morency, "YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context", IEEE Intelligent Systems, vol. 28, no. 3, pp. 46-53, May-June 2013, doi:10.1109/MIS.2013.34
1. E. Cambria et al., “Sentic Computing for Social Media Marketing,” Multimedia Tools and Applications, vol. 59, no. 2, 2012, pp. 557-577.
2. P. Turney, “Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews,” Proc. 40th Ann. Meeting of the Assoc. Computational Linguistics, ACL, 2002, pp. 417-424.
3. B. Schuller et al., “‘The Godfather’ vs. ‘Chaos’: Comparing Linguistic Analysis Based on Online Knowledge Sources and Bags-of-N-Grams for Movie Review Valence Estimation,” Proc. Int'l Conf. Document Analysis and Recognition, IEEE, 2009, pp. 858-862.
4. M. Wöllmer et al., “LSTM-Modeling of Continuous Emotions in an Audiovisual Affect Recognition Framework,” Image and Vision Computing, vol. 31, no. 1, 2012, pp. 153-163.
5. B. Pang and L. Lee, “A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts,” Proc. 42nd Meeting of the Assoc. Computational Linguistics, ACL, 2004, pp. 271-278.
6. F. Eyben, M. Wöllmer, and B. Schuller, “OpenSMILE—The Munich Versatile and Fast Open-Source Audio Feature Extractor,” Proc. ACM Multimedia, ACM, 2010, pp. 1459-1462.
7. M. Hall, “Correlation-Based Feature Selection for Machine Learning,” doctoral dissertation, Dept. of Computer Science, Univ. of Waikato, 1999.
8. T.B. Dinh, N. Vo, and G. Medioni, “Context Tracker: Exploring Supporters and Distracters in Unconstrained Environments,” Proc. Computer Vision and Pattern Recognition, IEEE, 2011, pp. 1177-1184.
9. L.P. Morency, J. Whitehill, and J. Movellan, “Generalized Adaptive View-Based Appearance Model: Integrated Framework for Monocular Head Pose Estimation,” Proc. Automatic Face and Gesture Recognition, IEEE, 2008; doi:10.1109/AFGR.2008.4813429.