2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)
March 20, 2018 to March 23, 2018
Luca Pascarella, Delft University of Technology, The Netherlands
Fabio Palomba, University of Zurich, Switzerland
Alberto Bacchelli, University of Zurich, Switzerland
Bug prediction aims to support developers in identifying the code artifacts that are more likely to be defective. Researchers have proposed prediction models to identify bug-prone methods and provided promising evidence that it is possible to operate at this level of granularity. In particular, models using a mixture of product and process metrics as independent variables led to the best results. In this study, we first replicate previous research on method-level bug prediction on different systems and timespans. Afterwards, we reflect on the evaluation strategy and propose a more realistic one. The key results of our study show that, when evaluated with the same strategy as in previous work, the performance of the method-level bug prediction models is similar to what was previously reported, also on different systems and timespans. However, when evaluated with a more realistic strategy, all the models show a dramatic drop in performance, with results close to those of a random classifier. Our replication and negative results indicate that method-level bug prediction is still an open challenge.
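The abstract's central contrast is between the classic evaluation (a random partition of methods into training and test sets) and a more realistic one that respects time, so that the model never trains on information from the future. A minimal sketch of the two splitting strategies is below; the metric names (`loc`, `churn`), the synthetic data, and the defect rate are purely illustrative assumptions, not the paper's actual features or results.

```python
import random

# Hypothetical method-level records: (timestamp, metrics, buggy label).
# Metric names and the 10% defect rate are illustrative assumptions.
random.seed(0)
data = [
    (t, {"loc": random.randint(5, 200), "churn": random.randint(0, 50)},
     random.random() < 0.1)
    for t in range(200)
]

def chronological_split(records, train_frac=0.8):
    """Realistic evaluation: train only on methods observed before the
    split point and test on later ones, so no future data leaks in."""
    ordered = sorted(records, key=lambda r: r[0])
    cut = int(len(ordered) * train_frac)
    return ordered[:cut], ordered[cut:]

def random_split(records, train_frac=0.8):
    """Classic evaluation: a random partition, which can mix past and
    future observations of the same codebase."""
    shuffled = records[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

train, test = chronological_split(data)
# Every training record precedes every test record in time.
assert max(r[0] for r in train) < min(r[0] for r in test)
```

With a random split, a model can exploit future snapshots of the very methods it is tested on, which inflates performance; the chronological split removes that advantage, which is consistent with the drop in performance the study reports.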
Computer bugs, Measurement, Predictive models, Software systems, Complexity theory, History
L. Pascarella, F. Palomba and A. Bacchelli, "Re-evaluating method-level bug prediction," 2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER), Campobasso, Italy, 2018, pp. 592-601.