Call for Papers: Special Issue on Multi-Modal Affective Computing of Large-Scale Multimedia Data

Submissions due: December 16, 2020

With the rapid development of digital photography and social networks, people have become accustomed to sharing their lives and expressing their opinions online. As a result, user-generated social media data, including text, images, audio, and video, are growing rapidly, which urgently demands advanced techniques for managing, retrieving, and understanding these data. Most existing work on multimedia analysis focuses on cognitive content understanding, such as scene understanding, object detection, and recognition. Recently, with the growing demand for emotion representation in artificial intelligence, multimedia affective analysis has attracted increasing research effort from both the academic and industrial communities.

Affective computing of user-generated, large-scale multimedia data is challenging for several reasons. First, because emotion is a subjective concept, affective analysis requires a multidisciplinary understanding of human perception and behavior. Second, emotions are often jointly expressed and perceived through multiple modalities, so multi-modal data fusion and complementarity need to be explored. Third, recent solutions based on deep learning require large-scale, finely labeled data. Finally, progress is constrained by the affective gap between low-level affective features and high-level emotions, and by the subjectivity of emotion perception across viewers under the influence of social, educational, and cultural factors. Recent advances in machine learning and artificial intelligence have made large-scale affective computing of multimedia possible.

This special issue of IEEE MultiMedia aims to gather high-quality contributions reporting the most recent progress in multi-modal affective computing of large-scale multimedia data and its wide range of applications. It targets a mixed audience of researchers and product developers from several communities, including multimedia, machine learning, psychology, and artificial intelligence. Topics of interest include, but are not limited to:

  • Affective content understanding of uni-modal text, images, facial expressions, and speech
  • Emotion recognition from multi-modal physiological signals
  • Emotion-based multi-modal summarization of social events
  • Affective tagging, indexing, retrieval, and recommendation of social media
  • Human-centered emotion perception prediction in social networks
  • Group emotion clustering, personality inference, and emotional region detection
  • Psychological perspectives on affective content analysis
  • Weakly supervised/unsupervised/self-supervised learning for affective computing
  • Deep learning and reinforcement learning for affective computing
  • Domain adaptation and generalization for affective computing
  • Fusion methods for multi-modal emotion recognition
  • Benchmark datasets and performance evaluation
  • Overviews and surveys on affective computing
  • Affective computing-based applications in entertainment, robotics, education, etc.

Important Dates

Submissions due: December 16, 2020
First notification: February 17, 2021
Revision submission: March 24, 2021
Notification of acceptance: April 28, 2021
Publication: April-June 2021

Guest Editors

Contact the guest editors at mm2-21@computer.org.

  • Dr. Sicheng Zhao, University of California, Berkeley, US
  • Prof. Min Xu, University of Technology Sydney, Australia
  • Prof. Qingming Huang, Chinese Academy of Sciences, China
  • Prof. Björn W. Schuller, Imperial College London, UK