CLOSED Call for Papers: Special Issue on Trustworthy AI

Submission deadline: 31 March 2022
Publication: November 2022

Artificial intelligence (AI) is powering change in every industry across the globe. Fueled by exponential growth in data, computing power, and network capacity, businesses and organizations are integrating AI to dramatically improve the speed and efficiency of their operations. AI is improving healthcare, optimizing commerce, enabling safer robotics platforms, and providing autonomy to future mobility systems. Unlocking the full potential of AI will require evidence that it can be trusted. AI remains one of the most promising technologies of the future, but as with any emerging technology, concerns about potential risks must be addressed. Governments, organizations, and experts across the world are working hard to promote trustworthy AI, but preventing an AI “trust gap” will require all stakeholders to work together and react quickly.

The term “trustworthiness” applies to all stakeholders, including the AI providers, the users, and the AI system itself. An AI system can be trusted with regard to its accuracy and safety, its impact on human autonomy and privacy, whether it treats people fairly, and whether it can explain how it arrives at its decisions. The trustworthiness of an AI provider is broader: it refers to an organization being trusted to implement appropriate measures and maintain sufficient management structures to deliver on the promise of trustworthy AI.

Topics of interest for this special issue include (but are not limited to):

  • Human-centered values. How can AI systems empower human beings, support them in making informed decisions, and foster their fundamental rights? Is it possible to ensure proper oversight mechanisms? How can communities that are subject to the outcomes of AI systems have a voice in their development and use?
  • Sustainability. How can we ensure that AI systems are designed, developed, and used in a sustainable and environmentally friendly way? How can AI development take into account its social and societal impact?
  • Transparency and Explainability. Can a human understand how the AI system works? Can the AI system explain, in a human-understandable language, how it arrived at its output?
  • Safety, Dependability, and Minimization of Harm. How can we ensure the proper internal functioning of an AI system, while avoiding safety-critical mistakes, unintended harm, and malicious threats?
  • Accountability. What technical measures may be taken to ensure that the potential negative impact of an AI system is identified? What mechanisms should be put in place to ensure responsibility and accountability of AI systems’ behavior? How can we define and support the auditability of AI systems?

Submission Guidelines

For author information and guidelines on submission criteria, please visit Computer’s Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts should not have been published or be currently under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.

Questions?

Please direct any correspondence before submission to the guest editors at co11-22@computer.org.

Guest Editors

  • Riccardo Mariani, IEEE CS First VP (Italy) and NVIDIA (Italy)
  • Barnaby Simkin, NVIDIA (Germany)
  • Marco Pavone, Stanford University (USA) and NVIDIA (USA)
  • Rita Cucchiara, Università degli Studi di Modena e Reggio Emilia (Italy)
  • Francesca Rossi, IBM (USA)
  • Ansgar Koene, EY (UK) and University of Nottingham (UK)
  • Jochen Papenbrock, NVIDIA