Building Effective Defect-Prediction Models in Practice
November/December 2005 (vol. 22 no. 6)
pp. 23-29
A. Güneş Koru, University of Maryland, Baltimore County
Hongfang Liu, University of Maryland, Baltimore County
Successfully predicting defect-prone modules can help software developers improve product quality by focusing quality assurance activities on those modules. We built several machine-learning models to predict the defective modules in five software products developed by NASA: CM1, JM1, KC1, KC2, and PC1. Using a set of static measures as predictor variables, the models failed to achieve satisfactory prediction performance on the products' original data sets. However, these data sets treated the smallest unit of functionality, a function or method, as a module, so defect prediction was performed at a fine level of granularity. Stratifying the original data sets according to module size showed that prediction performance was better in the subsets containing larger modules. Aggregating the method-level KC1 data to the class level improved prediction performance for the top defective classes. Guidelines based on these results help software developers build effective defect-prediction models for focused quality assurance activities.
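To make the abstract's approach concrete, the sketch below shows, in broad strokes, how a machine-learning defect-prediction model can be trained on static measures and evaluated with cross-validation. It is an illustration only, not the authors' exact setup: the synthetic data, the chosen metrics (lines of code, complexity, operand counts), and the random-forest classifier are assumptions standing in for the NASA data sets and the models reported in the article.

    # Illustrative sketch only (not the authors' exact method): predict
    # defect-prone modules from static measures and evaluate with
    # cross-validation. The synthetic data below stands in for
    # method-level metrics from data sets such as CM1, JM1, KC1, KC2, PC1.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_modules = 1000

    # Hypothetical static measures per module: lines of code, cyclomatic
    # complexity, and an operand count (Halstead-style).
    loc = rng.lognormal(mean=3.0, sigma=1.0, size=n_modules)
    complexity = rng.poisson(lam=4, size=n_modules) + 1
    operands = loc * rng.uniform(0.5, 2.0, size=n_modules)
    X = np.column_stack([loc, complexity, operands])

    # Synthetic defect labels: larger, more complex modules are more likely
    # to be defective, mimicking the class imbalance typical of defect data.
    p_defect = 1 / (1 + np.exp(-(0.01 * loc + 0.2 * complexity - 4)))
    y = rng.binomial(1, p_defect)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"Mean F1 across folds: {scores.mean():.2f}")

In practice, the same pipeline could be rerun on size-stratified subsets of the modules, or on metrics aggregated from the method level to the class level, to reproduce the kind of granularity comparison the article describes.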

This article is part of a special issue on predictor modeling.

Index Terms:
software quality, software metrics
Citation:
A. Güneş Koru, Hongfang Liu, "Building Effective Defect-Prediction Models in Practice," IEEE Software, vol. 22, no. 6, pp. 23-29, Nov.-Dec. 2005, doi:10.1109/MS.2005.149