[CLOSED] Call for Papers: Special Issue on Explainable AI for Software Engineering (XAI4SE)


Important Dates

Submissions Due: 21 November 2022

Publication: May/June 2023


Artificial Intelligence and Machine Learning (AI/ML) have been widely used in software engineering to automatically provide recommendations that improve developer productivity, software quality, and decision-making. Examples include code completion (e.g., GitHub’s Copilot, Amazon’s CodeWhisperer), code search, automated task recommendation, automated developer recommendation, automated defect/vulnerability/malware prediction, detection, localization, and repair, and many more.

However, many of these solutions are still not practical, explainable, or actionable. A lack of explainability often leads to a lack of trust in the predictions of AI/ML models in SE, which in turn hinders the adoption of AI/ML models in real-world software development practices [1,2,3,7,13]. This problem is especially pronounced for modern pre-trained language models of code, such as CodeBERT, GraphCodeBERT, CodeGPT, and CodeT5, which are large, black-box, and complex in nature. Explainable AI for SE is therefore a pressing concern for both the software industry and academia. When presented with predictions made in SE contexts, practitioners would like to know: Why was this code generated? Why is this person best suited for this task? Why is this file predicted as defective? Why does this task require the highest development effort?

A recent practitioners’ survey [3] found that explanations from AI/ML models in SE are critically needed, yet remain largely unexplored. Recent work has also shown that explainable AI techniques can make AI/ML models for software engineering more practical [4,7,8,9], explainable [5,7,12], and actionable [6,7,8,9,10,11], while also improving model quality [12,13] and model fairness by mitigating discrimination and bias [14,15]. However, XAI4SE is still an emerging research topic. This theme issue therefore calls for papers on topics that include, but are not limited to, the following:

  • Empirical studies on the need, motivation, and challenges of explainable AI for SE
  • Novel theories, tools, and techniques for generating textual or visual explanations for SE tasks (e.g., what form of explanation is most understandable to software practitioners?)
  • Empirical studies or short reflection articles on the fundamentals of human-centric XAI design that incorporates aspects of psychology, learning theories, cognitive science, and social sciences
  • Novel explainable AI techniques or applications to new SE tasks that serve various purposes, e.g., testing, debugging, visualizing, interpreting, and refining AI/ML models in SE
  • Explainable AI methods to detect and explain potential biases when applying AI tools in SE
  • Novel evaluation frameworks of explainable AI techniques for SE tasks
  • Empirical studies investigating whether different stakeholders need different explanations
  • Empirical studies on the impact of using explainable AI techniques in software development practices
  • Empirical studies of human-centric explainable AI for software engineering
  • Practical guidelines for increasing the explainability of AI/ML models in software engineering
  • Visions, reflections, industrial case studies, experience reports, and lessons learned of explainable AI for software engineering
  • Papers reporting negative or inconclusive results in any of the above areas, together with lessons learned and implications for future studies, are also encouraged

Submission Guidelines

For author information and guidelines on submission criteria, please visit IEEE Software’s Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts must not have been published or be currently under review elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.


Questions?

For more information contact the guest editors at sw3-2023@computer.org.

Guest Editors:

  • Chakkrit (Kla) Tantithamthavorn
  • Jürgen Cito
  • Hadi Hemmati
  • Satish Chandra