An Internally Replicated Quasi-Experimental Comparison of Checklist and Perspective-Based Reading of Code Documents
May 2001 (vol. 27 no. 5)
pp. 387-421

Abstract—The basic premise of software inspections is that they detect and remove defects before they propagate to subsequent development phases, where the cost of their detection and correction escalates. To exploit their full potential, software inspections must call for a close and strict examination of the inspected artifact. Reading techniques for defect detection can help here, since they tell inspection participants what to look for and, more importantly, how to scrutinize a software artifact in a systematic manner. Recent research efforts have investigated the benefits of scenario-based reading techniques. A major finding has been that these techniques help inspection teams find more defects than existing state-of-the-practice approaches, such as ad hoc or checklist-based reading (CBR). In this paper, we experimentally compare one scenario-based reading technique, namely perspective-based reading (PBR), with the more traditional CBR approach for defect detection in code documents. The comparison was performed in a series of three studies, a quasi-experiment and two internal replications, with a total of 60 professional software developers at Bosch Telecom GmbH. Meta-analytic techniques were applied to analyze the data. Our results indicate that PBR is more effective than CBR (i.e., inspection teams using PBR detected more unique defects than teams using CBR) and that the cost of defect detection is significantly lower with PBR than with CBR. This study therefore provides evidence of the efficacy of PBR scenarios for code documents in an industrial setting.
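The abstract states that meta-analytic techniques were used to pool the results of the quasi-experiment and its two internal replications. One standard technique for combining evidence from replicated studies is Fisher's combined-probability test; the sketch below illustrates the idea in Python. The p-values are hypothetical placeholders, not data from the paper, and this is not necessarily the authors' exact analysis procedure.

```python
# A minimal sketch of Fisher's combined-probability test, a standard
# meta-analytic technique for pooling p-values from replicated studies.
# The p-values below are hypothetical placeholders, not data from the paper.
from math import log

from scipy.stats import chi2

# One (hypothetical) p-value per study: the original quasi-experiment
# and the two internal replications.
p_values = [0.04, 0.09, 0.02]

# Fisher's statistic: X = -2 * sum(ln p_i). Under the joint null hypothesis
# (no PBR/CBR difference in any study), X follows a chi-squared distribution
# with 2k degrees of freedom, where k is the number of studies.
statistic = -2.0 * sum(log(p) for p in p_values)
df = 2 * len(p_values)
combined_p = chi2.sf(statistic, df)  # survival function = upper-tail p-value

print(f"X = {statistic:.2f}, df = {df}, combined p = {combined_p:.4f}")
```

Note that Fisher's test pools significance only; a full meta-analysis would also combine effect sizes across the three studies.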

Index Terms:
Software inspection, perspective-based reading, quasi experiment, replication, meta-analysis.
Citation:
Oliver Laitenberger, Khaled El Emam, Thomas G. Harbich, "An Internally Replicated Quasi-Experimental Comparison of Checklist and Perspective-Based Reading of Code Documents," IEEE Transactions on Software Engineering, vol. 27, no. 5, pp. 387-421, May 2001, doi:10.1109/32.922713