The Optimal Class Size for Object-Oriented Software
May 2002 (vol. 28, no. 5)
pp. 494-509

A growing body of literature suggests that there is an optimal size for software components. This means that components that are too small or too big will have a higher defect content (i.e., there is a U-shaped curve relating defect content to size). The U-shaped curve has become known as the Goldilocks Conjecture. Recently, a cognitive theory has been proposed to explain this phenomenon, and it has been extended to characterize object-oriented software. This conjecture has wide implications for software engineering practice. It suggests 1) that designers should deliberately strive to design classes of the optimal size, 2) that program decomposition is harmful, and 3) that there exists a maximum (threshold) class size that should not be exceeded to ensure fewer faults in the software. The purpose of the current paper is to evaluate this conjecture for object-oriented systems. We first demonstrate that the claims of an optimal component/class size (claim 1 above) and of smaller components/classes having a greater defect content (claim 2 above) are due to a mathematical artifact in the analyses performed previously. We then empirically test the threshold effect claim of this conjecture (claim 3 above). To our knowledge, no empirical test of size threshold effects for object-oriented systems has been performed thus far. We performed an initial study with an industrial C++ system and repeated it twice: on another C++ system and on a commercial Java application. Our results provide unambiguous evidence that there is no threshold effect of class size. We obtained the same result for all three systems using four different size measures. These findings suggest that there is a simple continuous relationship between class size and faults, and that the optimal class size, "smaller classes are better," and threshold effect conjectures have no sound theoretical or empirical basis.
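
The abstract does not spell out the mathematical artifact it refers to. Assuming it is the familiar ratio-correlation effect, in which a fault count is divided by size and the resulting defect density is then related back to size, the following minimal simulation sketch (illustrative only, not taken from the paper; the distributions and parameters are assumptions) shows how an apparent "smaller classes have higher fault density" pattern can emerge even when fault counts are generated independently of size.

    # Sketch of a ratio-correlation artifact (assumed interpretation; not the paper's code).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 2000

    # Class sizes drawn from a skewed distribution, as is typical of LOC measures.
    size = rng.lognormal(mean=4.0, sigma=0.8, size=n)

    # Fault counts generated independently of size: by construction there is no
    # real size effect of any kind, U-shaped or otherwise.
    faults = rng.poisson(3.0, n)

    # Defect density (faults per line of code), the ratio plotted against size
    # in the kind of analysis the conjecture rests on.
    density = faults / size

    print("corr(size, faults)  = %.3f" % np.corrcoef(size, faults)[0, 1])   # close to 0
    print("corr(size, density) = %.3f" % np.corrcoef(size, density)[0, 1])  # clearly negative

Because size appears in the denominator of the dependent variable, the negative association is built in; this is the kind of spurious "smaller classes are worse per line" pattern that can be misread as evidence for claims 1 and 2 above.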

Index Terms:
object-oriented metrics, software quality, quality models, quality prediction, software size, optimal size
Citation:
K. El Emam, S. Benlarbi, N. Goel, W. Melo, H. Lounis, S.N. Rai, "The Optimal Class Size for Object-Oriented Software," IEEE Transactions on Software Engineering, vol. 28, no. 5, pp. 494-509, May 2002, doi:10.1109/TSE.2002.1000452