CLOSED Call for Papers: Special Issue on Responsible, Explainable, and Emotional AI

Submissions Due: 6 May 2022
Publication: September/October 2022

Although we are now in the third decade of the 21st century, AI is still searching for avenues to become more human-centric, or more human-like. On one hand, AI and ML still struggle to map human emotional decisions and the underlying emotional-cognitive processes that arise from a complex brain network comprising several billion neurons. Such a demystification process presumes the convergence of different research areas and domains, including neuroscience, psychology, biology, and sociology. Research in AI has started crossing the border between machines and humans. Inevitably, concepts such as affective computing (computing that relates to, arises from, or deliberately influences emotions, according to the definition given by Rosalind Picard in the 1990s) and human and emotion AI (holding that AI, and especially ML, should deploy human knowledge derived from the socio-emotional and psychological sciences to become more human-like, and should exploit pedagogical paradigms as a form of training for ML algorithms) have emerged, leading researchers down exciting new paths toward AI that can challenge the Turing Test.

On the other hand, the effort to make human and machine intelligence meet and collaborate is also searching for tools and frameworks that can help humans understand and interpret the predictions made by ML models, through what is known as explainable AI. Today, ML and deep-learning models trained over vast amounts of data remain data-sourced “black boxes”: even their creators often cannot explain, or in some cases even understand, what exactly happens inside them and how an algorithm arrived at a specific result. Should the quest for explainable AI succeed, these models would no longer be opaque. But the quest is far from finished. For almost a decade, the human-AI community has tried to build robust frameworks for fair (or unbiased), explainable (or interpretable), responsible (or accountable), and transparent ML, known as FATML (Fair, Accountable, Transparent ML), with recent efforts to establish a right to explanation for algorithmic outputs, focusing on meta-cognition processes.

Acknowledging the exciting research fields and challenges above, which will introduce new methodologies, frameworks, and paradigms for deploying responsible, explainable, emotional, and ultimately more human AI, this special issue of IT Professional seeks submissions that bring ML closer to human understanding and human sense. We seek high-quality contributions from industry, government, business, and academia that present recent advances in human/emotion AI. Topics of interest include, but are not limited to, the following:

  • Human AI
  • Affective Computing/Emotion AI
  • FATML (Fair, Accountable, Transparent ML)
  • Interactive ML
  • AI Ethics
  • Bio/Meta Ethics

Submission Guidelines

For author information and guidelines on submission criteria, please visit the IT Pro Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts must not have been published previously or be currently under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.


Contact the guest editors at:

Guest Editors:

  • Michalis Feidakis
  • Saeid Abolfazli
  • Steve Andriole