An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development
June 1997 (vol. 23 no. 6)
pp. 329-346

Abstract—We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project, while maximizing our ability to gather useful data. This article has several goals: 1) to describe the experiment's design and show how we used simulation techniques to optimize it, 2) to present our results and discuss their implications for both software practitioners and researchers, and 3) to discuss several new questions raised by our findings. For each inspection, we randomly assigned three independent variables: 1) the number of reviewers on each inspection team (1, 2, or 4), 2) the number of teams inspecting the code unit (1 or 2), and 3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate. Our results showed that these treatments did not significantly influence the defect detection effectiveness, but that certain combinations of changes dramatically increased the inspection interval.
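The experimental design described above randomly assigns each inspection a treatment (team size of 1, 2, or 4; one or two teams; and, for two-team inspections, whether defects are repaired between the first and second team's inspections) and draws reviewers without replacement from a pool of 11 developers. As a minimal sketch of such an assignment procedure (the function and variable names are illustrative, not from the paper):

```python
import random

# Treatment levels stated in the abstract; identifiers here are illustrative.
TEAM_SIZES = [1, 2, 4]            # reviewers per inspection team
NUM_TEAMS = [1, 2]                # teams inspecting the code unit
REVIEWER_POOL = [f"reviewer_{i}" for i in range(11)]  # pool of 11 developers

def assign_inspection(rng: random.Random) -> dict:
    """Randomly assign one inspection's treatment and its reviewers."""
    team_size = rng.choice(TEAM_SIZES)
    n_teams = rng.choice(NUM_TEAMS)
    # The repair-between-inspections variable only applies when two teams
    # inspect the same code unit.
    repair_between = rng.choice([True, False]) if n_teams == 2 else False
    # Reviewers are drawn without replacement, so no one serves twice
    # on the same inspection.
    reviewers = rng.sample(REVIEWER_POOL, team_size * n_teams)
    teams = [reviewers[i * team_size:(i + 1) * team_size]
             for i in range(n_teams)]
    return {"team_size": team_size, "n_teams": n_teams,
            "repair_between": repair_between, "teams": teams}

inspection = assign_inspection(random.Random(0))
```

This is only a sketch of the randomization scheme; the paper's actual assignment procedure and any balancing constraints are described in its design section, not reproduced here.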

[1] K. Ballman and L.G. Votta, "Organizational Congestion in Large Scale Software Development," Proc. Third Int'l Conf. Software Process, pp. 123-134, Oct. 1994.
[2] D.B. Bisant and J.R. Lyle, "A Two-Person Inspection Method to Improve Programming Productivity," IEEE Trans. Software Eng., vol. 15, no. 10, pp. 1294-1304, Oct. 1989.
[3] F.O. Buck, "Indicators of Quality Inspections," Technical Report 21.802, IBM Systems Products Division, Kingston, N.Y., Sept. 1981.
[4] K.P. Burnham and W.S. Overton, "Estimation of the Size of a Closed Population when Capture Probabilities Vary Among Animals," Biometrika, vol. 65, pp. 625-633, 1978.
[5] J.M. Chambers, W.S. Cleveland, B. Kleiner, and P.A. Tukey, Graphical Methods for Data Analysis. Belmont, Calif.: Wadsworth Int'l Group, 1983.
[6] S.G. Eick, C.R. Loader, M.D. Long, S.A. Vander Wiel, and L.G. Votta, "Estimating Software Fault Content Before Coding," Proc. 14th Int'l Conf. Software Eng., pp. 59-65, May 1992.
[7] S.G. Eick, C.R. Loader, M.D. Long, S.A. Vander Wiel, and L.G. Votta, "Capture-Recapture and Other Statistical Methods for Software Inspection Data," Computing Science and Statistics: Proc. 25th Symp. Interface, San Diego, Calif., Interface Foundation of North America, Mar. 1993.
[8] M.E. Fagan, "Design and Code Inspections to Reduce Errors in Program Development," IBM Systems J., vol. 15, no. 3, pp. 182-211, 1976.
[9] W.S. Humphrey, Managing the Software Process. Reading, Mass.: Addison-Wesley, 1989.
[10] IEEE Standard for Software Reviews and Audits, IEEE Std 1028-1988, Software Eng. Technical Committee of the IEEE Computer Society, 1989.
[11] C.M. Judd, E.R. Smith, and L.H. Kidder, Research Methods in Social Relations, sixth edition. Fort Worth, Tex.: Holt, Rinehart, and Winston, 1991.
[12] J. Knight and E.A. Myers, "An Improved Inspection Technique," Comm. ACM, vol. 36, no. 11, pp. 51-61, Nov. 1993.
[13] K.E. Martersteck and A.E. Spencer, "Introduction to the 5ESS(TM) Switching System," AT&T Technical J., vol. 64, no. 6, part 2, pp. 1305-1314, July-Aug. 1985.
[14] D.L. Parnas and D.M. Weiss, "Active Design Reviews: Principles and Practices," Proc. Eighth Int'l Conf. Software Eng., pp. 215-222, Aug. 1985.
[15] K.H. Pollock, "Modeling Capture, Recapture, and Removal Statistics for Estimation of Demographic Parameters for Fish and Wildlife Populations: Past, Present, and Future," J. Am. Statistical Assoc., vol. 86, no. 413, pp. 225-238, Mar. 1991.
[16] A.A. Porter, L.G. Votta, and V.R. Basili, "An Experiment to Assess Different Defect Detection Methods for Software Requirements Inspections," Proc. 16th Int'l Conf. Software Eng., pp. 103-112, 1994.
[17] G.M. Schneider, J. Martin, and W.T. Tsai, "An Experimental Study of Fault Detection in User Requirements," ACM Trans. Software Eng. and Methodology, vol. 1, no. 2, pp. 188-204, Apr. 1992.
[18] S. Siegel and N.J. Castellan Jr., Nonparametric Statistics for the Behavioral Sciences, second edition. New York: McGraw-Hill, 1988.
[19] S.A. Vander Wiel and L.G. Votta, "Assessing Software Design Using Capture-Recapture Methods," IEEE Trans. Software Eng., vol. 19, pp. 1045-1054, 1993.
[20] L.G. Votta, "Does Every Inspection Need a Meeting?" ACM Software Eng. Notes, vol. 18, no. 5, pp. 107-114, Dec. 1993.
[21] A.L. Wolf and D.S. Rosenblum, "A Study in Software Process Data Capture and Analysis," Proc. Second Int'l Conf. Software Process, pp. 115-124, IEEE Computer Society, Feb. 1993.

Index Terms:
Software inspection, controlled experiments, industrial experimentation, ANOVA, power analysis.
Citation:
Adam A. Porter, Harvey P. Siy, Carol A. Toman, Lawrence G. Votta, "An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development," IEEE Transactions on Software Engineering, vol. 23, no. 6, pp. 329-346, June 1997, doi:10.1109/32.601071