Call for Papers: Special Issue on Next-generation Software Testing: AI-powered Test Automation

IEEE Software seeks submissions for this upcoming special issue.
Submissions Due: 15 August 2024

Important Dates

  • Submissions: 15 August 2024
  • Publication: May/June 2025


The provocative question we are interested in is as follows: 

Can we really ask a computer to test software systems without human intervention?

Our hypothesis is that AI can perform a whole range of tasks, such as designing, constructing, running, and maintaining automated test suites, and can in some cases replace humans, improving software testers' lives.

In this special issue, we want to collect scientific and industrial works that investigate the synergy between AI and software testing and how AI is reshaping test automation. In particular, our main goal is to better understand this still largely unexplored phenomenon, to collect innovative AI-based solutions, and to examine how these solutions are put into practice in available testing tools and frameworks.

We invite article submissions covering all aspects of the synergy between artificial intelligence (AI), machine learning (ML), and software testing, and how AI is reshaping test automation, including, but not limited to:

  • AI-powered testing tools and frameworks, and general support for test automation
  • Novel AI-based solutions addressing the limitations of traditional automated testing approaches
  • Use of large language models (e.g., ChatGPT) in software testing
  • AI-based test case and test script generation
  • Machine learning and artificial intelligence applied to test automation
  • Automated generation of test oracles
  • Test execution automation
  • Quality aspects of using AI for test automation (e.g., improving coverage and the average percentage of faults detected (APFD) metric)
  • Testing in Agile and CI contexts, and testing within DevOps
  • Analytics, learning, and big data in relation to test automation
  • Metrics, benchmarks, and estimation for any type of AI-powered test automation
  • Maintainability, monitoring, and refactoring of AI-based automated test suites
  • AI-powered test automation patterns
  • Test automation maturity and experience reports on AI-powered test automation
  • Evolution of AI-based automated test suites

Submission Guidelines

Manuscripts must not exceed 4,200 words, including figures and tables, which count as 250 words each. Submissions in excess of these limits may be rejected without refereeing. Articles we deem within the theme and scope will be peer reviewed and are subject to editing for magazine style, clarity, organization, and space. Be sure to include the name of the theme you're submitting for. Articles should have a practical orientation and be written in a style accessible to practitioners. Overly complex, purely research-oriented, or highly theoretical articles aren't appropriate; however, articles providing scientific evidence are welcome if they focus on practical and industrial contexts. IEEE Software doesn't republish material previously published in other venues, including other periodicals and formal conference or workshop proceedings, whether the previous publication was in print or electronic form.

Questions?

Contact the guest editors at sw3-25@computer.org.

  • Filippo Ricca (University of Genova, Italy)
  • Boni García (Universidad Carlos III de Madrid, Spain)
  • Michel Nass (Blekinge Institute of Technology, Sweden)
  • Mark Harman (Research Scientist at Meta and Professor of Software Engineering at UCL)