2015 International Conference on Big Data and Smart Computing (BigComp)
Jeju, South Korea
Feb. 9, 2015 to Feb. 11, 2015
ISBN: 978-1-4799-7303-3
pp. 39-42
Hyo Jin Do , Dept. of Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Ho-Jin Choi , Dept. of Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
ABSTRACT
What emotion do we feel when we see a situation? Multimodal sentiment analysis has been used to answer this question, but most research considers only low-level perceptual information such as textual, acoustic, and visual features. These features are poorly suited to classifying situations, because real-life complexity is difficult to capture at such a low level. In this paper, we propose an emotion prediction framework that identifies the polarity of emotion in situations using high-level contextual information, namely location, people, and time. Before predicting emotions, the framework structures data into ‘situation’ segments and labels each segment according to our carefully designed annotation guideline. Our approach is tested on a variety of situations from TV sitcoms, used as a substitute for real-life situations. Experimental results indicate that contextual information is more effective than textual or acoustic features in determining the emotions induced by situations.
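To make the classification step concrete, below is a minimal sketch of how a ‘situation’ segment described only by its contextual features (location, people, time) might be encoded and mapped to a binary emotion polarity. The example data, the one-hot feature encoding, and the logistic-regression classifier are illustrative assumptions; the paper does not specify its feature representation or learning algorithm.

```python
# Minimal sketch: polarity prediction from contextual features only.
# The segment fields (location, people, time) mirror the paper's feature set;
# the toy data and the scikit-learn classifier are assumptions, not the
# authors' implementation.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each 'situation' segment is described by its contextual features.
segments = [
    {"location": "apartment", "people": "Ross,Rachel", "time": "evening"},
    {"location": "coffee_shop", "people": "Chandler,Joey", "time": "afternoon"},
    {"location": "office", "people": "Ross", "time": "morning"},
    {"location": "apartment", "people": "Monica,Rachel", "time": "evening"},
]
# Polarity label per segment: 1 = positive emotion induced, 0 = negative.
labels = [1, 0, 0, 1]

def featurize(segment):
    """Turn one segment into a dict of binary contextual indicators."""
    feats = {"loc=" + segment["location"]: 1, "time=" + segment["time"]: 1}
    for person in segment["people"].split(","):
        feats["person=" + person] = 1  # one indicator per participant
    return feats

# DictVectorizer one-hot encodes the indicators; logistic regression
# then predicts the polarity of the induced emotion.
model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit([featurize(s) for s in segments], labels)

test = {"location": "coffee_shop", "people": "Joey", "time": "afternoon"}
print(model.predict([featurize(test)])[0])
```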
INDEX TERMS
Feature extraction, Guidelines, Sentiment analysis, TV, Acoustics, Visualization, Context
CITATION

H. J. Do and H.-J. Choi, "Sentiment analysis of real-life situations using location, people and time as contextual features," 2015 International Conference on Big Data and Smart Computing (BigComp), Jeju, South Korea, 2015, pp. 39-42.
doi: 10.1109/35021BIGCOMP.2015.7072847