Investigating the Defect Detection Effectiveness and Cost Benefit of Nominal Inspection Teams
May 2003 (vol. 29 no. 5)
pp. 385-397

Abstract—Inspection is an effective but expensive quality assurance activity for finding defects early during software development. The defect detection process, team size, and staff hours invested can have a considerable impact on the defect detection effectiveness and cost-benefit of an inspection. In this paper, we use empirical data and a probabilistic model to estimate this impact for nominal (noncommunicating) inspection teams in an experiment context. Further, the analysis investigates how cutting off the inspection after a certain time frame would influence inspection performance. Main findings of the investigation are:

1) Using combinations of different reading techniques in a team is considerably more effective than using the best single technique only (regardless of the observed level of effort).
2) For optimizing inspection performance, determining the optimal process mix in a team is more important in our model than adding an inspector (above a certain team size).
3) A high level of defect detection effectiveness is much more costly to achieve than a moderate level, since the average cost of the defects found by the inspector last added to a team increases more than linearly with growing effort investment.

The work provides an initial baseline of inspection performance with regard to process diversity and effort in inspection teams. We encourage further studies on the topic of time usage with defect detection techniques and its effect on inspection effectiveness in a variety of inspection contexts to support inspection planning with limited resources.
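The intuition behind these findings can be illustrated with a minimal sketch, which is not the paper's actual probabilistic model: a defect is found by a nominal team if at least one inspector detects it independently, and an inspector's detection probability depends on how well their reading technique matches the defect. All technique names, defect-class weights, and probabilities below are invented for illustration.

```python
def team_effectiveness(team, defect_classes):
    """Expected fraction of defects a nominal (noncommunicating) team
    finds: a defect is found if at least one inspector detects it
    independently, with detection odds depending on how well the
    inspector's reading technique matches the defect's class."""
    total = 0.0
    for weight, prob_by_technique in defect_classes:
        miss = 1.0  # probability that every inspector misses this class
        for technique in team:
            miss *= 1.0 - prob_by_technique[technique]
        total += weight * (1.0 - miss)
    return total

# Hypothetical defect profile: 60% of defects are easier to catch with a
# checklist, 40% with a scenario-based technique (all numbers invented).
classes = [
    (0.6, {"checklist": 0.50, "scenario": 0.20}),
    (0.4, {"checklist": 0.20, "scenario": 0.50}),
]

# Finding 1: a mixed team beats a homogeneous team of the same size that
# uses only the best single technique, because the techniques cover
# different defect classes.
best_single = team_effectiveness(["checklist"] * 4, classes)
mixed = team_effectiveness(["checklist"] * 2 + ["scenario"] * 2, classes)

# Finding 3: the marginal effectiveness of each added inspector shrinks,
# so the cost per additional defect found grows with team size.
marginal, prev = [], 0.0
for n in range(1, 6):
    eff = team_effectiveness(["checklist"] * n, classes)
    marginal.append(eff - prev)
    prev = eff
```

With these toy numbers the mixed 2+2 team reaches an effectiveness of 0.84 versus about 0.80 for four checklist readers, and each added inspector contributes strictly less than the previous one, mirroring the more-than-linear cost growth the abstract describes.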


Index Terms:
Software inspection, reading techniques, cost-benefit modeling, empirical software engineering.
Citation:
Stefan Biffl, Michael Halling, "Investigating the Defect Detection Effectiveness and Cost Benefit of Nominal Inspection Teams," IEEE Transactions on Software Engineering, vol. 29, no. 5, pp. 385-397, May 2003, doi:10.1109/TSE.2003.1199069