CLOSED Call for Papers: Special Issue on Explainable AI and Machine Learning

AI-oriented and machine learning-dependent algorithms, and the applications built on them, are proliferating all around us. Beyond everyday uses such as speech and image recognition, these algorithms increasingly appear in safety-critical software, such as autonomous driving and robotics. On many tasks, the performance of artificial intelligence and machine learning (AI/ML) algorithms now equals or surpasses that of humans. However, applications based on these algorithms are highly opaque: it is difficult to decipher the reasoning behind a particular classification or decision that an AI/ML application produces.

Although their accuracy is usually high, AI/ML applications are not foolproof. Deadly accidents involving autonomous vehicles are one example of the risks of relying completely on these programs. For these applications to be accepted in our lives, there must ultimately be responsibility and accountability for the outcomes they produce. Knowing how these applications reach their determinations, and being able to justify an AI system's action or decision, are essential, particularly for addressing the following questions in appropriate scenarios.

  • How do we know the system is working correctly?
  • What combinations of factors support the decision?
  • Why was another action not taken?

This information constitutes explainability, which should be an integral part of verification and validation for AI/ML software. For this special issue, Computer seeks articles that describe approaches to and efforts toward AI/ML explainability.
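To make the idea concrete, the following minimal sketch shows one common form such an explanation can take: per-feature contributions of a simple linear decision model. Everything here is hypothetical and invented for illustration (the weights, the feature names, and the "brake"/"proceed" actions are not from the call); real AI/ML systems require far more sophisticated attribution methods.

```python
# Hypothetical illustration of decision explainability: the model, weights,
# and feature names below are invented for this sketch.

weights = {"speed": 0.8, "distance": -0.6, "visibility": -0.4}  # assumed linear model
bias = 0.1

def decide(features):
    """Return an action plus the per-feature evidence behind it."""
    # Each contribution = weight * feature value; its sign shows whether the
    # feature pushed the decision toward "brake" (positive) or "proceed".
    contributions = {f: weights[f] * v for f, v in features.items()}
    score = bias + sum(contributions.values())
    action = "brake" if score > 0 else "proceed"
    return action, contributions

action, why = decide({"speed": 1.2, "distance": 0.5, "visibility": 0.9})
# Ranking contributions by magnitude yields a human-readable explanation,
# answering "what combinations of factors support the decision?"
explanation = sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

For a linear model the contributions sum exactly to the decision score, so the explanation is faithful by construction; for opaque models, surrogate- and attribution-based methods approximate this property, which is precisely where the research challenges lie.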

Topics of Interest:

  • Examples of failures due to lack of explainability
  • Performance of learning algorithms
  • Appropriate levels of trust in learning algorithms
  • Approaches to AI/ML explainability
  • Causality and inference in AI/ML applications
  • Human factors in explainability
  • Psychological acceptability of AI/ML systems

Key Dates

  • Articles due for review: December 31, 2020
  • First notification to authors: February 26, 2021
  • Second revisions submission deadline: March 15, 2021
  • Second notification to authors: April 17, 2021
  • Camera-ready paper deadline: July 1, 2021
  • Publication: October 2021

Submission Guidelines

Manuscripts should not be published or currently submitted for publication elsewhere. For manuscript submission guidelines, visit www.computer.org/publications/author-resources/peer-review/magazines. When you are ready to submit, visit https://mc.manuscriptcentral.com/com-cs.

Questions?

Please contact the guest editors at co10-21@computer.org.

Guest editors:

  • M S Raunak, Loyola University Maryland/NIST
  • Rick Kuhn, NIST
