A Validation of Object-Oriented Design Metrics as Quality Indicators
October 1996 (vol. 22, no. 10)
pp. 751-761

Abstract—This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in [13]. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, to determine whether they can be used as early quality indicators. This study is complementary to the work described in [30], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life cycle. Also, on our data set, they are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.
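The paper's citation of Hosmer and Lemeshow [25] points to logistic regression as the kind of model used to relate design metrics to fault-proneness. As a rough, hypothetical illustration of that style of analysis (not the authors' tooling, and with metric values and fault labels invented purely for demonstration), each class can be described by its six Chidamber-Kemerer measurements [13] and a classifier fitted to flag fault-prone designs:

    # Illustrative sketch only: a logistic regression, in the spirit of [25],
    # relating the six Chidamber-Kemerer design metrics [13] to fault-proneness.
    # All metric values and fault labels below are invented for demonstration.
    from sklearn.linear_model import LogisticRegression

    # One row per class: [WMC, DIT, NOC, CBO, RFC, LCOM] measured on the design.
    design_metrics = [
        [12, 1, 0,  4, 20,  3],
        [35, 3, 2, 11, 58, 15],
        [ 7, 1, 0,  2, 10,  0],
        [48, 4, 5, 17, 90, 30],
        [20, 2, 1,  6, 33,  8],
        [55, 5, 3, 21, 97, 41],
    ]
    # 1 = at least one fault was later traced back to the class, 0 = none.
    fault_prone = [0, 1, 0, 1, 0, 1]

    model = LogisticRegression().fit(design_metrics, fault_prone)

    # Estimated probability that a new class design will turn out fault-prone.
    new_class = [[30, 3, 1, 9, 45, 12]]
    print(model.predict_proba(new_class)[0][1])

The appeal of such a predictor in the paper's setting is that its inputs are available from the design alone, before any code exists, which is what makes the metrics candidates for early quality indicators.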

[1] A.F. Brito and R. Carapuca, “Candidate Metrics for Object-Oriented Software within a Taxonomy Framework,” J. Systems Software, vol. 26, pp. 87-96, 1994.
[2] Amadeus Software Research, Getting Started With Amadeus, Amadeus Measurement System, 1994.
[3] V. Basili and D. Hutchens, "Analyzing a Syntactic Family of Complexity Metrics," IEEE Trans. Software Eng., vol. 9, no. 6, pp. 664-673, June 1983.
[4] V. Basili, R. Selby, and T.-Y. Phillips, "Metric Analysis and Data Validation Across Fortran Projects," IEEE Trans. Software Eng., vol. 9, no. 6, pp. 652-663, June 1983.
[5] V. Basili, L. Briand, and W. Melo, "Measuring the Impact of Reuse on Quality and Productivity in Object-Oriented Systems," Comm. ACM, vol. 39, no. 10, 1996.
[6] J. Bieman and B.-K. Kang, "Cohesion and Reuse in an Object-Oriented System," Proc. ACM Symp. Software Reusability, SSR'95, pp. 259-262, Apr. 1995. Reprinted in ACM Software Eng. Notes, Aug. 1995.
[7] B. Boehm, Software Eng. Economics, Prentice-Hall, 1981.
[8] L.C. Briand, V.R. Basili, and C.J. Hetmanski, "Developing Interpretable Models with Optimized Set Reduction for Identifying High-Risk Software Components," IEEE Trans. Software Eng., vol. 19, no. 11, pp. 1028-1044, Nov. 1993.
[9] L. Briand, K. El Emam, and S. Morasca, Theoretical and Empirical Validation of Software Product Measures, ISERN Technical Report 95-03, 1995.
[10] L. Briand, S. Morasca, and V. Basili, "Defining and Validating High-Level Design Metrics," Technical Report CS-TR-3301, Univ. of Maryland, College Park, Md. Submitted for publication.
[11] L.C. Briand, S. Morasca, and V.R. Basili, "Property-Based Software Engineering Measurement," IEEE Trans. Software Eng., vol. 22, no. 1, pp. 68-85, Jan. 1996.
[12] I. Brooks, "Object-Oriented Metrics Collection and Evaluation with a Software Process," Proc. OOPSLA '93 Workshop on Processes and Metrics for Object-Oriented Software Development, Washington, D.C., 1993.
[13] S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object Oriented Design," IEEE Trans. Software Eng., vol. 20, no. 6, pp. 476-493, 1994.
[14] S.R. Chidamber and C.F. Kemerer, "Authors' Reply," IEEE Trans. Software Eng., vol. 21, no. 3, p. 265, Mar. 1995.
[15] N.I. Churcher and M.J. Shepperd, "Comments on 'A Metrics Suite for Object-Oriented Design,'" IEEE Trans. Software Eng., vol. 21, no. 3, pp. 263-265, 1995.
[16] S.D. Conte, H. E. Dunsmore, and V. Y. Shen, Software Engineering Metrics and Models, Benjamin/Cummings, Menlo Park, Calif., 1986.
[17] J. Daly, A. Brooks, J. Miller, M. Roper, and M. Wood, "The Effect of Inheritance Depth on the Maintainability of Object-Oriented Software," Empirical Software Eng.: An Int'l J., vol. 1, no. 2, Feb. 1996.
[18] P. Devanbu, "GENOA—A Customizable, Language and Front-End Independent Code Analyzer," Proc. 14th Int'l Conf. Software Eng., May 1992.
[19] P. Devanbu, S. Karstu, W. Melo, and W. Thomas, “Analytical and Empirical Evaluation of Software Reuse,” Proc. 18th Int'l Conf. Software Eng., May 1996.
[20] R.B. Grady, Practical Software Metrics for Project Management and Process Improvement, Prentice Hall, Englewood Cliffs, N.J., 1992.
[21] W. Harrison, “Using Software Metrics to Allocate Testing Resources,” J. Management Information Systems, vol. 4, no. 4, pp. 93-105, 1988.
[22] W. Harrison, "Software Measurement: A Decision-Process Approach," Advances in Computers, vol. 39, pp. 51-105, 1994.
[23] G. Heller, J. Valett, and M. Wild, Data Collection Procedure for Software Eng. Laboratory (SEL) Database, SEL Series, SEL-92-002, 1992.
[24] M. Hitz and B. Montazeri, "Chidamber and Kemerer's Metrics Suite: A Measurement Theory Perspective," IEEE Trans. Software Eng., vol. 22, no. 4, pp. 267-271, Apr. 1996.
[25] D. Hosmer and S. Lemeshow, Applied Logistic Regression, Wiley-Interscience, 1989.
[26] N.E. Fenton, Software Metrics: A Rigorous Approach, Chapman & Hall, 1991.
[27] C.M. Judd, E.R. Smith, and L.H. Kidder, Research Methods in Social Relations, Harcourt Brace Jovanovich College Publishers, 1991.
[28] T.M. Khoshgoftaar, A.S. Pandya, and H.B. More, "A Neural Network Approach for Predicting Software Development Faults," Proc. Third Int'l IEEE Symp. Software Reliability Eng., North Carolina, 1992.
[29] F. Lanubile and G. Visaggio, "Evaluating Predictive Quality Models Derived from Software Measures: Lessons Learned," to appear in the J. Software and Systems; also available as Technical Report CS-TR-3606, Univ. of Maryland, Computer Science Dept., College Park, Md., 1996.
[30] W. Li and S. Henry, "Object-Oriented Metrics that Predict Maintainability," J. Systems Software, vol. 23, no. 2, pp. 111-122, 1993.
[31] F. McGarry, R. Pajerski, G. Page, S. Waligora, V. Basili, and M. Zelkowitz, Software Process Improvement in the NASA Software Eng. Laboratory, Technical Report CMU/SEI-95-TR-22, Carnegie Mellon Univ., Software Eng. Inst., Dec. 1994.
[32] J.C. Munson and T.M. Khoshgoftaar, "The Detection of Fault-Prone Programs," IEEE Trans. Software Eng., vol. 18, May 1992.
[33] C.L. Chang, R.A. Stachowitz, and J.B. Combs, “Validation of Nonmonotonic Knowledge-Based Systems,” Proc. IEEE Int'l Conf. Tools for Artificial Intelligence, Nov. 1990.
[34] R.W. Selby and A.A. Porter, "Learning from Examples: Generation and Evaluation of Decision Trees for Software Resource Analysis," IEEE Trans. Software Eng., vol. 14, no. 12, pp. 1743-1757, Dec. 1988.
[35] N. Schneidewind, "Methodology for Validating Software Metrics," IEEE Trans. Software Eng., vol. 18, pp. 410-421, May 1992.
[36] B. Stroustrup, The C++ Programming Language, 2nd ed., Addison-Wesley, Reading, Mass., 1991.
[37] D.A. Young, Object-Oriented Programming with C++ and OSF/Motif, Prentice Hall, 1992.

Index Terms:
Object-oriented design metrics, error prediction model, object-oriented software development, C++ programming language.
Citation:
Victor R. Basili, Lionel C. Briand, Walcélio L. Melo, "A Validation of Object-Oriented Design Metrics as Quality Indicators," IEEE Transactions on Software Engineering, vol. 22, no. 10, pp. 751-761, Oct. 1996, doi:10.1109/32.544352