There is near consensus among academics, industry, and governments that the scale and complexity of software influence how we make decisions in software systems and how those systems interact with humans. Many technologies have been advanced to address these challenges. In this special issue, we aim to explore opportunities to bring observability and explainability together to improve the transparency of complex, large-scale software systems. Observability supports timely, continuous, and configurable reconstruction of a system’s internal states at varying degrees of granularity across the system. Explainability focuses mostly on post hoc approximation of a system’s behavior, relating inputs to chosen outputs. Observability and explainability each offer a distinct view of transparency. Their synergy not only supports data-driven solutions but also paves a path toward broader goals in software systems, including but not limited to tracing causality, accountable decision making and auditing, automation and self-organization, and collaborative intelligence among humans, software systems, and large-scale machine learning.
We invite article submissions covering all aspects of observability and explainability for decision making in software systems, including but not limited to:
For author information and guidelines on submission criteria, please visit the Software's Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special issue name. Manuscripts must not have been published or be currently under submission for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.
Please contact the guest editors at sw1-24@computer.org.
Guest Editors: