
CLOSED: Call for Papers: Special Issue on AI Failures: Causes, Implications, and Prevention


Important Dates

  • Submissions due: 1 May 2024
  • Publication: November 2024


In the past decade, we have seen exponential growth in the development and deployment of intelligent and autonomous systems. Along with this rapid proliferation, we have witnessed a continued rise in reports of autonomous learning system failures, malfunctions, and undesirable outcomes. Multiple efforts to log these failures have also been initiated.

In engineering, we learn more from analyzing failures than from studying successes. There is significant value in documenting and tracking AI failures in sufficient detail to understand their root causes and to put processes and practices in place to prevent similar problems in the future. Efforts to track and record vulnerabilities in traditional software led to the establishment of the National Vulnerability Database, which has contributed to understanding vulnerability trends, their root causes, and how to prevent them.

Computer magazine is soliciting papers for a special issue on AI Failures: Causes, Implications, and Prevention. This special issue will explore AI failures, from early systems to recent ones. Papers should discuss the causes of the failures, their implications for the field of AI, and what can be learned from them.

Topics of interest include, but are not limited to:

  • Specific AI systems that have failed
    • Autonomous vehicles
    • Diagnostic systems
    • Medical devices
    • Decision-aiding tools
    • Recommendation systems
    • Robotics

  • The causes of AI failures
    • Inadequate training data
    • Testing failures
    • Human interaction with AI/machines
    • Adversarial attacks on AI systems
    • Transfer learning problems and evolution of use and environment

  • The implications of AI failures
    • Trust/acceptance
    • Societal and legal implications
    • Quantification of loss from AI failures
    • Economic impact
    • Research directions
    • Regulatory issues

  • What can be learned from the failures
    • Root cause analysis
    • Fault tolerance techniques
    • Testing methods and adequacy
    • Importance of assurance metrics and methods

  • How to avoid AI failures in the future
    • Documentation and reporting of failures
    • Safety/security analysis methods for AI/ML
    • Explainability integration

Submissions should be original and unpublished. 


Submission Guidelines

For author information and guidelines on submission criteria, visit the Author's Information page. Please submit papers through the ScholarOne system and be sure to select the special issue or special section name. Manuscripts should not have been published or be currently under submission for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. If requested, abstracts should be sent by email directly to the guest editors.


Questions?

Contact the guest editors at co11-24@computer.org.

  • M S Raunak – raunak@nist.gov
  • Rick Kuhn – kuhn@nist.gov
