Issue No. 06 - June (1997 vol. 23)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/32.601071
<p><b>Abstract</b>—We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project, while maximizing our ability to gather useful data. This article has several goals: 1) to describe the experiment's design and show how we used simulation techniques to optimize it, 2) to present our results and discuss their implications for both software practitioners and researchers, and 3) to discuss several new questions raised by our findings. For each inspection, we randomly assigned three independent variables: 1) the number of reviewers on each inspection team (1, 2, or 4), 2) the number of teams inspecting the code unit (1 or 2), and 3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate. Our results showed that these treatments did not significantly influence the defect detection effectiveness, but that certain combinations of changes dramatically increased the inspection interval.</p>
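The randomized design described above (three independent variables per inspection, reviewers drawn without replacement from a pool of 11 developers) can be sketched roughly as follows. This is an illustrative reconstruction, not code from the paper; all names and the constraint that repair-between-inspections only applies with two teams are assumptions.

```python
import random

# Hypothetical pool of 11 experienced developers (names are illustrative)
POOL = [f"dev{i}" for i in range(1, 12)]

def assign_treatment(rng: random.Random) -> dict:
    """Randomly assign one inspection's treatment, per the design in the abstract."""
    team_size = rng.choice([1, 2, 4])   # reviewers per inspection team
    num_teams = rng.choice([1, 2])      # teams inspecting the code unit
    # Assumption: the repair-between-inspections variable is only
    # meaningful when a second team inspects the same unit.
    repair_between = rng.choice([True, False]) if num_teams == 2 else False
    # Reviewers selected without replacement from the pool
    reviewers = rng.sample(POOL, team_size * num_teams)
    return {
        "team_size": team_size,
        "num_teams": num_teams,
        "repair_between": repair_between,
        "reviewers": reviewers,
    }

rng = random.Random(0)
t = assign_treatment(rng)
```

At most 4 × 2 = 8 reviewers are needed per inspection, so the pool of 11 always suffices for a single draw without replacement.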
Index Terms—Software inspection, controlled experiments, industrial experimentation, ANOVA, power analysis.
Adam A. Porter, Harvey P. Siy, Carol A. Toman, Lawrence G. Votta, "An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development", IEEE Transactions on Software Engineering, vol. 23, no. 6, pp. 329-346, June 1997, doi:10.1109/32.601071