Validating the ISO/IEC 15504 Measure of Software Requirements Analysis Process Capability
June 2000 (vol. 26 no. 6)
pp. 541-566

Abstract—ISO/IEC 15504 is an emerging international standard on software process assessment. It defines a number of software engineering processes and a scale for measuring their capability. One of the defined processes is software requirements analysis (SRA). A basic premise of the measurement scale is that higher process capability is associated with better project performance (i.e., predictive validity). This paper describes an empirical study that evaluates the predictive validity of SRA process capability. Assessments using ISO/IEC 15504 were conducted on 56 projects worldwide over a period of two years. Performance measures on each project, such as the ability to meet budget commitments and staff productivity, were also collected using questionnaires. The results provide strong evidence of predictive validity for the SRA process capability measure used in ISO/IEC 15504, but only for organizations with more than 50 IT staff. Specifically, a strong relationship was found between the implementation of requirements analysis practices as defined in ISO/IEC 15504 and the productivity of software projects. For smaller organizations, evidence of predictive validity was rather weak. This can be interpreted in a number of ways: either the measure of capability is not suitable for small organizations, or SRA process capability has less effect on project performance in small organizations.

[1] B. Baker, C. Hardyck, and L. Petrinovich, “Weak Measurements vs. Strong Statistics: An Empirical Critique of S.S. Stevens' Proscriptions on Statistics,” Educational and Psychological Measurement, vol. 26, pp. 291-309, 1966.
[2] S. Benno and D. Frailey, “Software Process Improvement in DSEG: 1989-1995,” Texas Instruments Technical J., vol. 12, no. 2, pp. 20-28, Mar.-Apr. 1995.
[3] A. Bicego, M. Khurana, and P. Kuvaja, “Bootstrap 3.0: Software Process Assessment Methodology,” Proc. Software Quality Management, 1998.
[4] G. Bohrnstedt and T. Carter, “Robustness in Regression Analysis,” Sociological Methodology, H. Costner, ed., Jossey-Bass, 1971.
[5] L. Briand, K. El Emam, and S. Morasca, “On the Application of Measurement Theory in Software Engineering,” Empirical Software Eng., An Int'l J., vol. 1, no. 1, pp. 61-88, 1996.
[6] J.G. Brodman and D.L. Johnson, “What Small Business and Small Organizations Say About the CMM,” Proc. 16th Int'l Conf. Software Eng., pp. 331-340, IEEE CS Press, 1994.
[7] J. Brodman and D. Johnson, “Return on Investment (ROI) from Software Process Improvement as Measured by US Industry,” Software Process: Improvement and Practice, Pilot Issue, John Wiley & Sons, 1995.
[8] K. Butler, “The Economic Benefits of Software Process Improvement,” Crosstalk, vol. 8, no. 7, pp. 14-17, July 1995.
[9] D. Card, “Understanding Process Improvement,” IEEE Software, pp. 102-103, July 1991.
[10] B.K. Clark, The Effects of Software Process Maturity on Software Development Effort, doctoral dissertation, Univ. of Southern California, Los Angeles, 1997.
[11] F. Coallier, J. Mayrand, and B. Lague, “Risk Management in Software Product Procurement,” Elements of Software Process Assessment and Improvement, K. El Emam and N.H. Madhavji, eds., IEEE CS Press, 1999.
[12] W. Cochran, Planning and Analysis of Observational Studies. John Wiley & Sons, 1983.
[13] J. Cohen, Statistical Power Analysis for the Behavioral Sciences. Lawrence Erlbaum Assoc., 1988.
[14] L. Cronbach, “Coefficient Alpha and the Internal Structure of Tests,” Psychometrika, pp. 297-334, Sept. 1951.
[15] M. David, R. Little, M. Samuhel, and R. Triest, “Imputation Models Based on the Propensity to Respond,” Proc. Business and Economics Section, Am. Statistical Assoc., pp. 168-173, 1983.
[16] C. Deephouse, D. Goldenson, M. Kellner, and T. Mukhopadhyay, “The Effects of Software Processes on Meeting Targets and Quality,” Proc. Hawaiian Int'l Conf. Systems Sciences, vol. 4, pp. 710-719, Jan. 1995.
[17] R. Dion, “Elements of a Process Improvement Program,” IEEE Software, vol. 9, no. 4, pp. 83-85, July 1992.
[18] R. Dion, “Process Improvement and the Corporate Balance Sheet,” IEEE Software, vol. 10, no. 4, pp. 28-35, July/Aug. 1993.
[19] S. Dutta and L. van Wassenhove, “An Empirical Study of Adoption Levels of Software Management Practices within European Firms,” INSEAD Research Initiative in Software Excellence Working Paper, 1997.
[20] K. El Emam, “The Internal Consistency of the ISO/IEC 15504 Software Process Capability Scale,” Proc. Fifth Int'l Symp. Software Metrics, pp. 72-81, IEEE CS Press, 1998.
[21] K. El Emam and D.R. Goldenson, “SPICE: An Empiricist's Perspective,” Proc. Second IEEE Int'l Software Eng. Standards Symp., pp. 84-97, Aug. 1995.
[22] K. El Emam and N.H. Madhavji, “The Reliability of Measuring Organizational Maturity,” Software Process: Improvement and Practice, vol. 1, no. 1, pp. 3-25, 1995.
[23] K. El Emam, S. Quintin, and N.H. Madhavji, “User Participation in the Requirements Engineering Process: An Empirical Study,” Requirements Eng. J., vol. 1, pp. 4-26, 1996.
[24] K. El Emam, J.-N. Drouin, and W. Melo, SPICE: The Theory and Practice of Software Process Improvement and Capability Determination. K. El Emam, J.-N. Drouin, and W. Melo, eds., IEEE CS Press, 1998.
[25] K. El Emam, J.-M. Simon, S. Rousseau, and E. Jacquet, “Cost Implications of Interrater Agreement for Software Process Assessments,” Proc. Fifth Int'l Symp. Software Metrics, pp. 38-51, 1998.
[26] K. El Emam and L. Briand, “Costs and Benefits of Software Process Improvement,” Better Software Practice for Business Benefit: Principles and Experience, R. Messnarz and C. Tully, eds., IEEE CS Press, 1999.
[27] K. El Emam and D. Goldenson, “An Empirical Review of Software Process Assessments,” Advances in Computers, to appear, 2000.
[28] R. Flowe and J. Thordahl, “A Correlational Study of the SEI's Capability Maturity Model and Software Development Performance in DOD Contracts,” MSc thesis, U.S. Air Force Inst. of Technology, 1994.
[29] B. Ford, “An Overview of Hot-Deck Procedures,” Incomplete Data in Sample Surveys, Volume 2: Theory and Bibliographies, W. Madow, I. Olkin, and D. Rubin, eds., Academic Press, 1983.
[30] C. Franz and D. Robey, “Organizational Context, User Involvement, and the Usefulness of Information Systems,” Decision Sciences, vol. 17, pp. 329-356, 1986.
[31] P. Fusaro, K. El Emam, and B. Smith, “The Internal Consistencies of the 1987 SEI Maturity Questionnaire and the SPICE Capability Dimension,” Empirical Software Eng.: An Int'l J., vol. 3, pp. 179-201, 1997.
[32] D. Galletta and A. Lederer, “Some Cautions on the Measurement of User Information Satisfaction,” Decision Sciences, vol. 20, pp. 419-438, 1989.
[33] P. Gardner, “Scales and Statistics,” Review of Educational Research, vol. 45, no. 1, pp. 43-57, Winter 1975.
[34] D.R. Goldenson and J.D. Herbsleb, “After the Appraisal: A Systematic Survey of Process Improvement, Its Benefits, and Factors that Influence Success,” Technical Report CMU/SEI-95-TR-009, Software Engineering Institute, 1995.
[35] D. Goldenson, K. El Emam, J. Herbsleb, and C. Deephouse, “Empirical Studies of Software Process Assessment Methods,” Elements of Software Process Assessment and Improvement, K. El Emam and N.H. Madhavji, eds., IEEE CS Press, 1999.
[36] A. Gopal, T. Mukhopadhyay, and M. Krishnan, “The Role of Software Processes and Communication in Offshore Software Development,” submitted for publication, 1997.
[37] W. Harmon, “Benchmarking: The Starting Point for Process Improvement,” Proc. ESI Workshop on Benchmarking and Software Process Improvement, Apr. 1998.
[38] J. Herbsleb, A. Carleton, J. Rozum, J. Siegel, and D. Zubrow, “Benefits of CMM-Based Software Process Improvement: Initial Results,” Technical Report, CMU-SEI-94-TR-13, Software Eng. Inst., 1994.
[39] D. Hosmer and S. Lemeshow, Applied Logistic Regression. John Wiley & Sons, 1989.
[40] W.S. Humphrey, “Characterizing the Software Process,” IEEE Software, vol. 5, no. 2, pp. 73-79, Mar. 1988.
[41] W.S. Humphrey, T.R. Snyder, and R.R. Willis, “Software Process Improvement at Hughes Aircraft,” IEEE Software, vol. 8, no. 4, pp. 11-23, July/Aug. 1991.
[42] M. Ibanez and H. Rempp, “European User Survey Analysis,” ESPITI Project Report, Feb. 1996.
[43] B. Ives, M. Olson, and J. Baroudi, “The Measurement of User Information Satisfaction,” Comm. ACM, vol. 26, no. 10, pp. 785-793, 1983.
[44] C. Jones, Assessment and Control of Software Risks, Yourdon Press, Englewood Cliffs, N.J., 1994.
[45] C. Jones, “The Pragmatics of Software Process Improvements,” Software Process Newsletter, IEEE CS Technical Council on Software Eng., no. 5, pp. 1-4, Winter 1996. (available at http://www.seg.iit.nrc.ca/SPN)
[46] C. Jones, “The Economics of Software Process Improvements,” Elements of Software Process Assessment and Improvement, K. El Emam and N. H. Madhavji, eds., IEEE CS Press, 1999.
[47] F. Kerlinger, Foundations of Behavioral Research. Holt, Rinehart, and Winston, 1986.
[48] E. Kim and J. Lee, “An Exploratory Contingency Model of User Participation and MIS Use,” Information and Management, vol. 11, pp. 87-97, 1986.
[49] H. Krasner, “The Payoff for Software Process Improvement: What it is and How to Get it,” Elements of Software Process Assessment and Improvement, K. El Emam and N. H. Madhavji, eds., IEEE CS Press, 1999.
[50] M. Krishnan and M. Kellner, “Measuring Process Consistency: Implications for Reducing Software Defects,” submitted for publication, Mar. 1998.
[51] S. Labovitz, “Some Observations on Measurement and Statistics,” Social Forces, vol. 46, no. 2, pp. 151-160, Dec. 1967.
[52] S. Labovitz, “The Assignment of Numbers to Rank Order Categories,” Am. Sociological Review, vol. 35, pp. 515-524, 1970.
[53] P. Lawlis, R. Flowe, and J. Thordahl, “A Correlational Study of the CMM and Software Development Performance,” Software Process Newsletter, IEEE CS Technical Council on Software Eng., no. 7, pp. 1-5, Fall 1996. (available at http://www.seg.iit.nrc.ca/SPN)
[54] L. Lebsanft, “Bootstrap: Experiences with Europe's Software Process Assessment and Improvement Method,” Software Process Newsletter, IEEE CS Technical Council on Software Eng., no. 5, pp. 6-10, Winter 1996. (available at http://www.seg.iit.nrc.ca/SPN)
[55] J. Lee and S. Kim, “The Relationship between Procedural Formalization in MIS Development and MIS Success,” Information and Management, vol. 22, pp. 89-111, 1992.
[56] R. Lindsay and A. Ehrenberg, “The Design of Replicated Studies,” The Am. Statistician, vol. 47, no. 3, pp. 217-228, 1993.
[57] W. Lipke and K. Butler, “Software Process Improvement: A Success Story,” Crosstalk, vol. 5, no. 9, pp. 29-39, Sept. 1992.
[58] R. Little and D. Rubin, Statistical Analysis With Missing Data. Wiley, 1987.
[59] F. McGarry, S. Burke, and W. Decker, “Measuring the Impacts Individual Process Maturity Attributes Have on Software Products,” Proc. Fifth Int'l Symp. Software Metrics, pp. 52-60, IEEE CS Press, 1998.
[60] J. McIver and E. Carmines, Unidimensional Scaling. Sage Publications, 1981.
[61] J. McKeen, T. Guimaraes, and J. Wetherbe, “The Relationship between User Participation and User Satisfaction: An Investigation of Four Contingency Factors,” MIS Quarterly, pp. 427-451, Dec. 1994.
[62] J. Nunnally and I. Bernstein, Psychometric Theory. McGraw-Hill, 1994.
[63] M. Paulk et al., “Capability Maturity Model, Version 1.1,” IEEE Software, pp. 18-27, July 1993.
[64] M. Paulk and M. Konrad, “Measuring Process Capability versus Organizational Process Maturity,” Proc. Fourth Int'l Conf. Software Quality, Oct. 1994.
[65] J. Rice, Mathematical Statistics and Data Analysis. Duxbury Press, 1995.
[66] P. Rosenbaum and D. Rubin, “The Central Role of the Propensity Score in Observational Studies for Causal Effects,” Biometrika, vol. 70, no. 1, pp. 41-55, 1983.
[67] P. Rosenbaum and D. Rubin, “Constructing a Control Group Using Multivariate Matched Sampling Methods that Incorporate the Propensity Score,” The Am. Statistician, vol. 39, no. 1, pp. 33-38, 1985.
[68] R. Rosenthal, “Replication in Behavioral Research,” Replication Research in the Social Sciences, J. Neuliep, ed., Sage Publications, 1991.
[69] D. Rubin, “The Bayesian Bootstrap,” The Annals of Statistics, vol. 9, no. 1, pp. 130-134, 1981.
[70] D. Rubin, Multiple Imputation for Nonresponse in Surveys. John Wiley & Sons, 1987.
[71] D. Rubin, “An Overview of Multiple Imputation,” Proc. Survey Research Section, Am. Statistical Assoc., pp. 79-84, 1988.
[72] D. Rubin and N. Schenker, “Multiple Imputation for Interval Estimation from Simple Random Samples with Ignorable Nonresponse,” J. Am. Statistical Assoc., vol. 81, no. 394, pp. 366-374, 1986.
[73] D. Rubin and N. Schenker, “Multiple Imputation in Health Care Databases: An Overview,” Statistics in Medicine, vol. 10, pp. 585-598, 1991.
[74] H. Rubin, “Software Process Maturity: Measuring its Impact on Productivity and Quality,” Proc. Int'l Conf. Software Eng., pp. 468-476, 1993.
[75] H. Rubin, “Findings of the 1997 Worldwide Benchmark Project: Worldwide Software Engineering Performance Summary,” Meta Group, 1998.
[76] D. Rubin, H. Stern, and V. Vehovar, “Handling `Don't Know' Survey Responses: The Case of the Slovenian Plebiscite,” J. Am. Statistical Assoc., vol. 90, no. 431, pp. 822-828, 1995.
[77] I. Sande, “Hot-Deck Imputation Procedures,” Incomplete Data in Sample Surveys, Volume 3: Proc. Symp., W. Madow and I. Olkin, eds., Academic Press, 1983.
[78] J. Schaefer, Analysis of Incomplete Multivariate Data. Chapman & Hall, 1997.
[79] V. Sethi and W. King, “Construct Measurement in Information Systems Research: An Illustration in Strategic Systems,” Decision Sciences, vol. 22, pp. 455-472, 1991.
[80] S. Siegel and J. Castellan, Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, 1988.
[81] Software Eng. Inst., The Capability Maturity Model: Guidelines for Improving the Software Process. Addison-Wesley, 1995.
[82] “Software Engineering Institute C4 Software Technology Reference Guide—A Prototype,” Handbook CMU/SEI-97-HB-001, Software Eng. Inst., 1997.
[83] “Top-Level Standards Map,” Software Eng. Inst., Feb. 1998.
[84] “CMMI A Specification Version 1.1,” Software Eng. Inst., Apr. 1998.
[85] I. Sommerville and P. Sawyer, Requirements Engineering: A Good Practice Guide, John Wiley & Sons, New York, 1998.
[86] P. Spector, “Ratings of Equal and Unequal Response Choice Intervals,” J. Social Psychology, vol. 112, pp. 115-119, 1980.
[87] “The SPIRE Handbook: Better Faster Cheaper Software Development in Small Companies,” The SPIRE Project, ESSI Project 23873, Nov. 1998.
[88] S. Stevens, “Mathematics, Measurement, and Psychophysics,” Handbook of Experimental Psychology, S. Stevens, ed., John Wiley & Sons, 1951.
[89] A. Subramanian and S. Nilakanta, “Measurement: A Blueprint for Theory-Building in MIS,” Information and Management, vol. 26, pp. 13-20, 1994.
[90] D. Treiman, W. Bielby, and M. Cheng, “Evaluating a Multiple Imputation Method for Recalibrating 1970 U.S. Census Detailed Industry Codes to the 1980 Standard,” Sociological Methodology, vol. 18, 1988.
[91] P. Velleman and L. Wilkinson, “Nominal, Ordinal, Interval, and Ratio Typologies Are Misleading,” The Am. Statistician, vol. 47, no. 1, pp. 65-72, Feb. 1993.
[92] H. Wohlwend and S. Rosenbaum, “Software Improvements in an International Company,” Proc. Int'l Conf. Software Eng., pp. 212-220, 1993.

Index Terms:
Software process assessment, software process improvement, standards, software quality, validity, predictive validity, requirements engineering process, requirements analysis process, empirical evaluation.
Khaled El Emam, Andreas Birk, "Validating the ISO/IEC 15504 Measure of Software Requirements Analysis Process Capability," IEEE Transactions on Software Engineering, vol. 26, no. 6, pp. 541-566, June 2000, doi:10.1109/32.852742