CLOSED Call for Papers: Special Issue on Human-Centered Visualization Approaches to AI Explainability, Interpretability, Understanding, and Ethics

Submissions due: 28 March 2022

Publication: September/October 2022

The rapid increase in applications of artificial intelligence (AI) and machine-learning (ML) algorithms intensifies the need for explainable data visualization methods that make algorithmic guidance transparent and understandable. Explainable AI/ML methods are important to a wide range of domains, especially highly regulated ones such as banking and financial services, healthcare, transportation, and defense. By harnessing the high-bandwidth human perceptual channel, explainable AI/ML methods that use human-centered data visualization techniques offer a viable path to increasing the adoption and reliability of new predictive AI and ML algorithms. A significant challenge for explainable AI methods is transforming black-box AI and ML technologies into glass-box solutions that humans can understand, ideally trust, and effectively manage in practical applications. Successful deployments of explainable AI/ML data visualization techniques reveal not only which features matter most to a model but also model bias, performance measures related to drift and accuracy, and adoption risks. Nevertheless, building trust in AI methods also requires responsibility, accountability, and fairness of algorithms. There is still a long way to go from explaining AI and ML to data analysts to conveying trust and transparency to domain experts in practical applications.

This special issue seeks high-quality articles on the use of data visualization in support of a transformation toward transparency and trust. The predominant focus is on using data visualization to explain AI and ML models and systems in an understandable and accurate way, thus supporting their interpretability and increasing trust in their application, whether during the design and development phase of a model, during its training and execution, or in a post-hoc phase focused on reporting and regulatory due diligence.

Areas of interest include, but are not limited to, visualization and human-computer interaction approaches to:

  • Pre-modeling, modeling, and post-modeling explainability and interpretability of AI and ML
  • Model performance management and steering
  • Model risk management and model validation
  • Identification, analysis, and reporting of data and model bias
  • Visual representations of data and model quality and uncertainty
  • Identification, representation, and mitigation of human bias in the analytical process
  • Visualization for communicating AI modeling processes and decision-making dependencies
  • AI literacy development, education, and training of technical and non-technical audiences
  • Causality and inference in AI/ML applications
  • Industry and business applications in regulated environments

Submission Guidelines

Visit the CG&A Author Information page for instructions on how to submit a manuscript. Please submit your paper through the ScholarOne online system, and be sure to select the special-issue name as the manuscript type. Manuscripts should not be published or currently under review elsewhere. Submit only full papers intended for review to the ScholarOne portal, not abstracts; if requested, send abstracts by email directly to the guest editors.

Questions?

Contact the guest editors at cga5-2022@computer.org.

Guest Editors:

  • Miguel Encarnação, Regions Bank, USA, miguel.encarnacao@regions.com
  • Jörn Kohlhammer, Fraunhofer IGD and TU Darmstadt, Germany, joern.kohlhammer@igd.fraunhofer.de
  • Chad Steed, Regions Bank, USA, chad.steed@regions.com
