Determining Inspection Cost-Effectiveness by Combining Project Data and Expert Opinion
December 2005 (vol. 31, no. 12)
pp. 1074-1092
There is general agreement among software engineering practitioners that software inspections are an important technique for achieving high software quality at reasonable cost. However, there are many ways to perform such inspections and many factors that affect their cost-effectiveness. It is therefore important to be able to estimate this cost-effectiveness in order to monitor it, improve it, and convince developers and management that the technology and related investments are worthwhile. This work proposes a rigorous but practical way to do so. In particular, a meaningful model to measure cost-effectiveness is proposed, and a method to determine cost-effectiveness by combining project data and expert opinion is described. To demonstrate the feasibility of the proposed approach, the results of a large-scale industrial case study are presented and an initial validation is performed.
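As a rough illustration of the kind of analysis the abstract describes (and the Monte Carlo simulation named in the index terms below), the Python sketch that follows combines measured project data with expert-elicited triangular distributions to produce a distribution of cost-effectiveness. All quantities, names, and the simplified ratio are hypothetical placeholders, not the paper's actual model or data.

# Illustrative Monte Carlo sketch: estimating inspection
# cost-effectiveness by combining measured project data with
# expert-elicited uncertainty. All values below are hypothetical.

import random
import statistics

N_RUNS = 10_000

# Project data: quantities measured on the project (hypothetical).
DEFECTS_FOUND = 40         # defects detected by the inspection
INSPECTION_EFFORT = 120.0  # person-hours spent inspecting

def simulate_once() -> float:
    # Expert opinion enters as triangular(low, high, mode)
    # distributions, a common elicitation format (minimum, maximum,
    # and most likely estimate). First, the fraction of the found
    # defects that would otherwise have slipped to testing or the
    # field ...
    slip_fraction = random.triangular(0.5, 0.9, 0.7)
    # ... then the rework effort each slipped defect would have cost.
    hours_per_slipped_defect = random.triangular(5.0, 40.0, 15.0)
    effort_saved = DEFECTS_FOUND * slip_fraction * hours_per_slipped_defect
    # Cost-effectiveness here: downstream effort saved per hour invested.
    return effort_saved / INSPECTION_EFFORT

samples = sorted(simulate_once() for _ in range(N_RUNS))
print(f"median cost-effectiveness: {statistics.median(samples):.2f}")
print(f"90% interval: [{samples[int(0.05 * N_RUNS)]:.2f}, "
      f"{samples[int(0.95 * N_RUNS)]:.2f}]")

Repeating the draw many times turns expert point estimates into a distribution, so the result can be reported as a median with an uncertainty interval rather than as a single number.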


Index Terms:
Software inspection, cost-effectiveness model, Monte Carlo simulation, case study, expert opinion.
Citation:
Bernd Freimut, Lionel C. Briand, Ferdinand Vollei, "Determining Inspection Cost-Effectiveness by Combining Project Data and Expert Opinion," IEEE Transactions on Software Engineering, vol. 31, no. 12, pp. 1074-1092, Dec. 2005, doi:10.1109/TSE.2005.136