Call for Papers (Closed): Special Issue on Human-Centered Visualization Approaches to AI Explainability, Interpretability, Understanding, and Ethics

Submissions due: 28 March 2022

Publication: September/October 2022

The rapid increase in applications of artificial intelligence (AI) and machine-learning (ML) algorithms heightens the need for explainable data visualization methods that make algorithmic guidance transparent and understandable. Explainable AI/ML methods matter to a wide range of domains, especially highly regulated applications such as banking and financial services, healthcare, transportation, and defense. By harnessing the high-bandwidth human perceptual channel, explainable AI/ML methods built on human-centered data visualization techniques offer a viable path to increasing the adoption and reliability of new predictive AI and ML algorithms. A significant challenge for explainable AI methods is transforming black-box AI and ML technologies into glass-box solutions that humans can understand, ideally trust, and effectively manage in practical applications. Successful deployments of explainable AI/ML data visualization techniques reveal which features matter most to a model, expose model bias, track performance measures such as drift and accuracy, and surface adoption risks. Nevertheless, building trust in AI methods also requires responsible, accountable, and fair algorithms. There is still a long way to go from explaining AI and ML to data analysts to conveying trust and transparency to domain experts in practical applications.

This special issue seeks high-quality articles on the use of data visualization to support a transformation toward transparency and trust. The predominant focus is on using data visualization to explain AI and ML models and systems in an understandable and accurate way, thereby supporting their interpretability and increasing trust in their application, whether during a model's design and development, during its training and execution, or in a post-hoc phase focused on reporting and regulatory due diligence.

Areas of interest include, but are not limited to, visualization and human-computer interaction approaches to:

  • Pre-modeling, modeling, and post-modeling explainability and interpretability of AI and ML
  • Model performance management and steering
  • Model risk management and model validation
  • Identification, analysis, and reporting of data and model bias
  • Visual representations of data and model quality and uncertainty
  • Identification, representation, and mitigation of human bias in the analytical process
  • Visualization for communicating AI modeling processes and decision-making dependencies
  • AI literacy development, education, and training of technical and non-technical audiences
  • Causality and inference in AI/ML applications
  • Industry and business applications in regulated environments

Submission Guidelines

Visit the CG&A Author Information page for instructions on submitting a manuscript. Please submit your paper through the ScholarOne online system and be sure to select the special-issue name as the manuscript type. Manuscripts should not be published or currently under submission elsewhere. Please submit only full papers intended for review to the ScholarOne portal, not abstracts. If requested, abstracts should be sent by email directly to the guest editors.

Questions?

Contact the guest editors at cga5-2022@computer.org.

Guest Editors: