Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment
June 1995 (vol. 21 no. 6)
pp. 563-575
Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults. These methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for different, specific classes of faults, will have a significantly higher success rate. We evaluated this hypothesis using a 3 × 2⁴ partial factorial, randomized experimental design. Forty-eight graduate students in computer science participated in the experiment. They were assembled into sixteen three-person teams. Each team inspected two SRS using some combination of the Ad Hoc, Checklist, and Scenario methods. For each inspection we performed four measurements: 1) individual fault detection rate, 2) team fault detection rate, 3) percentage of faults first identified at the collection meeting (meeting gain rate), and 4) percentage of faults first identified by an individual but never reported at the collection meeting (meeting loss rate). The experimental results are that 1) the Scenario method had a higher fault detection rate than either the Ad Hoc or the Checklist method, 2) Scenario reviewers were more effective at detecting the faults their scenarios were designed to uncover, and were no less effective than either Ad Hoc or Checklist reviewers at detecting other faults, 3) Checklist reviewers were no more effective than Ad Hoc reviewers, and 4) collection meetings produced no net improvement in the fault detection rate: meeting gains were offset by meeting losses.
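
As a clarifying illustration of the four measurements listed above, the following minimal Python sketch shows how they could be computed from sets of fault identifiers. All names and data in it are hypothetical and serve only as an example; the paper's actual analysis is not reproduced here.

    # Illustrative sketch (hypothetical data): computing the four inspection
    # measurements described in the abstract from sets of fault identifiers.

    # Faults seeded in the SRS under inspection (hypothetical IDs).
    all_faults = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

    # Faults each reviewer found during individual preparation (hypothetical).
    individual_finds = {
        "reviewer_a": {1, 2, 5},
        "reviewer_b": {2, 3, 7},
        "reviewer_c": {4, 5},
    }

    # Faults the team reported at the collection meeting (hypothetical).
    meeting_report = {1, 2, 3, 4, 7, 9}

    # 1) Individual fault detection rate, per reviewer.
    individual_rates = {
        name: len(found) / len(all_faults)
        for name, found in individual_finds.items()
    }

    # Union of everything found before the meeting.
    found_before_meeting = set().union(*individual_finds.values())

    # 2) Team fault detection rate: faults in the final team report.
    team_rate = len(meeting_report) / len(all_faults)

    # 3) Meeting gain rate: faults first identified at the meeting.
    meeting_gain = len(meeting_report - found_before_meeting) / len(all_faults)

    # 4) Meeting loss rate: faults found individually but never reported.
    meeting_loss = len(found_before_meeting - meeting_report) / len(all_faults)

    print(individual_rates, team_rate, meeting_gain, meeting_loss)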


Index Terms:
Controlled experiments, technique and methodology evaluation, inspections, reading techniques.
Citation:
Adam A. Porter, Lawrence G. Votta, Jr., Victor R. Basili, "Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment," IEEE Transactions on Software Engineering, vol. 21, no. 6, pp. 563-575, June 1995, doi:10.1109/32.391380