Submission deadline: 31 July 2022
Publication: April 2023
Artificial intelligence (AI) continues to demonstrate its positive impact on society and its successful adoption in data-rich domains. The global AI market was valued at USD $62.35 billion in 2020 and is expected to grow at an annual rate of 40.2% from 2021 to 2028. Although AI is solving real-world challenges and transforming industries, there are serious concerns about its ability to make decisions responsibly when those decisions touch human lives and values. Responsible AI is one of the greatest scientific challenges facing our time and our societies.
In addition to existing and emerging regulations, many ethical principles and guidelines for responsible AI have been issued by governments, research organizations, and enterprises. However, high-level regulations and ethical principles alone are far from sufficient to ensure the trustworthiness of AI systems. The issues go beyond traditional software "bugs" and theoretical guarantees for algorithms. They require operationalized quality, regulatory compliance, and ethical assurance across the full lifecycle, from a software-engineering perspective. Even for a highly trustworthy AI system, gaining the trust of individuals, domain experts, and wider communities is a further challenge that must be addressed carefully before the system can be widely accepted.
In this special issue, we are looking for cutting-edge software-engineering methods, techniques, tools, and real-world case studies that can help operationalize responsible AI. Topics of interest include, but are not limited to:
- Requirements engineering for responsible AI
- Software architecture and design of responsible AI systems
- Software verification and validation for responsible AI systems
- DevOps, AIOps, and MLOps for responsible AI systems
- Development processes for responsible AI systems
- Governance of responsible AI
- Software engineering for explainable AI
- Reproducibility and traceability of AI
- Trust and trustworthiness of AI systems
- Human-centric AI systems and human values in AI systems
For author information and guidelines on submission criteria, please visit the Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts must not have been published previously or be under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.
Contact the guest editors at email@example.com.
Dr. Qinghua Lu, CSIRO’s Data61, Australia
Prof. Liming Zhu, CSIRO’s Data61, Australia
Prof. Jon Whittle, CSIRO’s Data61, Australia
Prof. Bret Michael, Naval Postgraduate School, USA