Call for Papers: Special Issue on AI Failures: Causes, Implications, and Prevention
Computer seeks submissions for this upcoming special issue.
Submissions Due: 1 May 2024
Publication: November 2024
In the past decade, we have seen exponential growth in the development and deployment of intelligent and autonomous systems. Along with this rapid proliferation, we have witnessed a continued rise in reports of autonomous learning system failures, malfunctions, and undesirable outcomes. Multiple efforts to log these failures have also been initiated.
In engineering, we learn more from analyzing failures than from studying successes. There is significant value in documenting and tracking AI failures in sufficient detail to understand their root causes and to put processes and practices in place to prevent similar problems in the future. Efforts to track and record vulnerabilities in traditional software led to the establishment of the National Vulnerability Database, which has contributed to understanding vulnerability trends, their root causes, and how to prevent them.
Computer magazine is soliciting papers for a special issue on AI Failures: Causes, Implications, and Prevention. This special issue will explore AI failures, from early systems to recent ones. Papers should discuss the causes of the failures, their implications for the field of AI, and what can be learned from them.
Topics of interest include, but are not limited to:
- Specific AI systems that have failed
  - Decision-aiding tools
- The causes of AI failures
  - Inadequate training data
  - Human interaction with AI/machines
  - Adversarial attacks on AI systems
  - Transfer learning problems and the evolution of use and environment
- The implications of AI failures
  - Societal and legal implications
  - Quantification of loss from AI failures
- What can be learned from the failures
  - Root cause analysis
  - Fault tolerance techniques
  - Testing methods and adequacy
  - The importance of assurance metrics and methods
- How to avoid AI failures in the future
  - Documentation and reporting of failures
  - Safety/security analysis methods for AI/ML
Submissions should be original and unpublished, and manuscripts must not be under review for publication elsewhere. For author information and guidelines on submission criteria, visit the Author’s Information page. Please submit papers through the ScholarOne system and be sure to select the special issue or special section name. Submit only full papers intended for review, not abstracts, to the ScholarOne portal. If requested, abstracts should be sent directly to the guest editors by email.