A Ranking of Software Engineering Measures Based on Expert Opinion
September 2003 (vol. 29 no. 9)
pp. 811-824
Ming Li, Carol Smidts

Abstract—This research proposes a framework, based on expert opinion elicitation, for selecting the software engineering measures that are the best indicators of software reliability. The work builds on the top 30 measures identified in an earlier study conducted by Lawrence Livermore National Laboratory. A set of ranking criteria and their levels was identified. Each measure's score on each ranking criterion was elicited through expert opinion and then aggregated into a single score using multiattribute utility theory; the basic aggregation scheme selected was a linear additive scheme. A comprehensive sensitivity analysis was carried out, covering variation of the ranking criteria levels, of the criterion weights, and of the aggregation scheme. The top-ranked measures were identified. Using these measures in each software development phase can lead to a more accurate quantitative prediction of software reliability.
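
As an illustration of the aggregation step described in the abstract, the following Python sketch combines expert-elicited criterion scores with a linear additive multiattribute scheme and adds a crude version of the weight-variation sensitivity check. This is a minimal sketch under stated assumptions, not the authors' implementation: the criteria, weights, measures, and scores below are hypothetical placeholders.

# Minimal sketch of a linear additive multiattribute aggregation
# (hypothetical data; not the paper's actual criteria or scores).

def aggregate(scores, weights):
    # Linear additive scheme: overall = sum_i w_i * s_i, normalized by sum_i w_i.
    total_w = sum(weights[c] for c in scores)
    return sum(weights[c] * s for c, s in scores.items()) / total_w

# Hypothetical criterion weights and expert-elicited scores on a 0-1 utility scale.
weights = {"relevance": 0.5, "ease_of_collection": 0.3, "repeatability": 0.2}
measures = {
    "defect_density": {"relevance": 0.9, "ease_of_collection": 0.6, "repeatability": 0.8},
    "cyclomatic_complexity": {"relevance": 0.5, "ease_of_collection": 0.9, "repeatability": 0.9},
}

# Rank measures by aggregated score, highest first.
ranking = sorted(measures, key=lambda m: aggregate(measures[m], weights), reverse=True)
print(ranking)

# Crude sensitivity check in the spirit of the paper's weight variation:
# perturb each weight and see whether the top-ranked measure changes.
for c in weights:
    for delta in (-0.1, 0.1):
        w = dict(weights)
        w[c] = max(0.0, w[c] + delta)
        top = max(measures, key=lambda m: aggregate(measures[m], w))
        print(f"weight[{c}] {delta:+.1f} -> top measure: {top}")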

References:
[1] Toekomstonderzoek [Futures Research], suppl. 10, pp. 6.6.2-01-6.6.4-07, 1974.
[2] IEEE Guide for the Use of IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE Std 982.2, New York: IEEE, 1988.
[3] Reactor Risk Reference Document, Technical Report NUREG-1150, US Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, D.C., 1989.
[4] IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610, New York: IEEE, 1990.
[5] PACS Design Specification, Lockheed Martin Corp., Gaithersburg, Md., July 1998.
[6] PACS Requirements Specification, Lockheed Martin Corp., Gaithersburg, Md., July 1998.
[7] PACS Source Code, Lockheed Martin Corp., Gaithersburg, Md., July 1998.
[8] PACS Test Plan, Lockheed Martin Corp., Gaithersburg, Md., 1998.
[9] TestMaster Reference Guide, Teradyne Software and Systems Test, Nashua, N.H., Aug. 2000.
[10] TestMaster User's Manual, Teradyne Software and Systems Test, Nashua, N.H., Oct. 2000.
[11] WinRunner TSL Reference Guide, Mercury Interactive Corp., Sunnyvale, Calif., 2001.
[12] J.S. Armstrong, Long-Range Forecasting: From Crystal Ball to Computer, second ed. John Wiley, 1985.
[13] L.C. Briand, B. Freimut, and F. Vollei, "Assessing the Cost-Effectiveness of Inspections by Combining Project Data and Expert Opinion," Proc. 11th Int'l Symp. Software Reliability Eng., pp. 246-258, 2000.
[14] S. Chhibber, G. Apostolakis, and D. Okrent, "A Taxonomy of Issues Related to the Use of Expert Judgments in Probabilistic Safety Studies," Reliability Eng. and System Safety, vol. 38, pp. 27-45, 1992.
[15] S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object Oriented Design," IEEE Trans. Software Eng., vol. 20, no. 6, pp. 476-493, 1994.
[16] R.T. Clemen and R.L. Winkler, "Limits for the Prediction and Value of Information from Dependent Sources," Operations Research, vol. 33, pp. 427-442, 1985.
[17] R.M. Cooke, Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford Univ. Press, 1991.
[18] T. Dyba, "An Instrument for Measuring the Key Factors of Success in Software Process Improvement," Empirical Software Eng., vol. 5, pp. 357-390, 2000.
[19] N.E. Fenton and M. Neil, "A Critique of Software Defect Prediction Models," IEEE Trans. Software Eng., vol. 25, no. 5, pp. 675-689, Sept./Oct. 1999.
[20] N. Fenton and S.L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, second ed. Boston: PWS Publishing, 1997.
[21] M. Host and C. Wohlin, "A Subjective Effort Estimation Experiment," Information and Software Technology, vol. 39, no. 11, pp. 755-762, 1997.
[22] M. Host and C. Wohlin, "An Experimental Study of Individual Subjective Effort Estimations and Combinations of the Estimates," Proc. 20th Int'l Conf. Software Eng., pp. 332-339, 1998.
[23] G.L. Johnson and X. Yu, "Objective Software Quality Assessment," Proc. Nuclear Science Symp. (NSS), pp. 1691-1698, 1999.
[24] B. Kitchenham, S. Linkman, and D. Law, "DESMET: A Methodology for Evaluating Software Engineering Methods and Tools," Computing and Control Eng. J., vol. 8, no. 3, pp. 120-126, 1997.
[25] J.D. Lawrence, W.L. Persons, A. Sicherman, and G.L. Johnson, Assessment of Software Reliability Measurement Methods for Use in Probabilistic Risk Assessment, Technical Report UCRL-ID-136035, Fission Energy and Systems Safety Program, Lawrence Livermore Nat'l Laboratory, Sept. 1998.
[26] M. Li and C. Smidts, "Ranking Software Engineering Measures Related to Reliability Using Expert Opinion," Proc. 11th Int'l Symp. Software Reliability Eng., pp. 246-258, 2000.
[27] M.R. Lyu, Handbook of Software Reliability Engineering. McGraw-Hill, 1995.
[28] M.G. Mendonça and V.R. Basili, "Validation of an Approach for Improving Existing Measurement Frameworks," IEEE Trans. Software Eng., vol. 26, no. 6, pp. 484-499, June 2000.
[29] K.H. Moller and D.J. Paulish, Software Metrics: A Practitioner's Guide to Improved Product Development. IEEE Press, Chapman and Hall, 1993.
[30] A. Mosleh, V.M. Bier, and G. Apostolakis, Methods for the Elicitation and Use of Expert Opinion in Risk Assessment, Technical Report NUREG/CR-4962, Pickard, Lowe and Garrick, Inc., Newport Beach, Calif., Aug. 1987.
[31] J.D. Musa, A. Iannino, and K. Okumoto, Software Reliability: Measurement, Prediction and Application. New York: McGraw-Hill, 1987.
[32] L.H. Putnam and A. Fitzsimmons, "Estimating Software Costs," Datamation, vol. 25, no. 11, pp. 171-178, 1979.
[33] C.V. Ramamoorthy and F.B. Bastani, "Software Reliability: Status and Perspectives," IEEE Trans. Software Eng., vol. 8, no. 4, pp. 354-371, 1982.
[34] T.L. Saaty, "A Scaling Method for Priorities in Hierarchical Structures," J. Math. Psychology, vol. 15, pp. 59-62, 1977.
[35] M. Shepperd and D. Ince, Derivation and Validation of Software Metrics. Oxford: Clarendon Press, 1993.
[36] C. Smidts and M. Li, Software Engineering Measures for Predicting Software Reliability in Safety Critical Digital Systems, Technical Report NUREG/GR-0019, Univ. of Maryland, Washington, D.C., Nov. 2000.
[37] C. Smidts and M. Li, Validation of a Methodology for Assessing Software Quality, Technical Report UMD-RE-2002-07, Univ. of Maryland, College Park, May 2002.
[38] A. Tversky and D. Kahneman, "Judgment under Uncertainty: Heuristics and Biases," Science, vol. 185, pp. 1124-1141, 1974.
[39] T.A. Wheeler et al., Analysis of Core Damage Frequency from Internal Events: Expert Judgment Elicitation, Technical Report NUREG/CR-4550, Sandia Nat'l Laboratories, 1989.
[40] C. Wohlin, A. von Mayrhauser, M. Host, and B. Regnell, "Subjective Evaluation as a Tool for Learning from Software Project Success," Information and Software Technology, vol. 42, no. 14, pp. 983-992, 2000.
[41] X. Zhang and H. Pham, "An Analysis of Factors Affecting Software Reliability," J. Systems and Software, vol. 50, no. 1, pp. 43-56, 2000.

Index Terms:
Expert opinion, software reliability, software engineering measure, ranking.
Citation:
Ming Li, Carol Smidts, "A Ranking of Software Engineering Measures Based on Expert Opinion," IEEE Transactions on Software Engineering, vol. 29, no. 9, pp. 811-824, Sept. 2003, doi:10.1109/TSE.2003.1232286