• evaluate the performance of process tasks and their results,
• gain insight into the causes of variation in process performance,
• analyze trends and pinpoint root causes of problems, and
• predict future process or project outcomes.
• make more accurate performance predictions,
• control their performance in real time,
• quickly identify the need for corrective action, and
• provide continually updated predictions about task and project outcomes.
• Process measurement. The available measures are often the wrong measures for analyzing process performance at the task or event level. Measures defined for monthly reporting are often poor measures for analyzing process tasks. Also, different predictive models might require different operational definitions from the same area of measurement. For instance, system availability might be predicted more accurately by mean time to failure than by defect density so long as the test data fairly represent the operational environment. Identifying the most effective measurement definitions for different QPM objectives would save industry much trial-and-error frustration.
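To make the contrast between operational definitions concrete, here is a minimal sketch of an availability estimate built from mean time to failure rather than defect density. All numbers are invented for illustration, and the assumed repair time (`mttr_hours`) is hypothetical:

```python
# Hypothetical illustration: estimating system availability from test data
# via mean time to failure (MTTF). Failure times and MTTR are invented.

failure_times_hours = [120.0, 95.0, 210.0, 160.0]  # observed times between failures in test
mttr_hours = 2.0  # assumed mean time to repair

mttf = sum(failure_times_hours) / len(failure_times_hours)  # mean time to failure
availability = mttf / (mttf + mttr_hours)  # steady-state availability estimate

print(f"MTTF = {mttf:.1f} h, availability = {availability:.4f}")
```

The estimate is only as good as the test data's fidelity to the operational profile, which is exactly the caveat raised above.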
• Statistical analysis of design work. Software development is nonroutine, knowledge-intensive design work, from the architecture phase through coding. Process events differ in terms of actors, component difficulties, and conditions and are therefore typically affected by multiple sources of variation. Are statistical techniques borrowed from manufacturing appropriate for characterizing this behavior, or do we need different statistical techniques for characterizing the capability and sources of variation in design work? The Point/Counterpoint essays associated with this special section (pp. 48–51) debate the appropriateness of statistical process control methods drawn from manufacturing. These answers are critical for industrial practice.
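For readers unfamiliar with the manufacturing-style techniques under debate, here is a minimal sketch of an XmR (individuals and moving-range) control chart computation, the kind of SPC method whose fit for design work is questioned above. The inspection review rates are hypothetical data:

```python
# Minimal XmR (individuals / moving-range) control chart sketch.
# Hypothetical data: code-inspection review rates, e.g. LOC reviewed per hour.
rates = [150, 180, 130, 200, 170, 160, 190, 140]

mean_x = sum(rates) / len(rates)  # center line
moving_ranges = [abs(b - a) for a, b in zip(rates, rates[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128 for subgroups of 2)
ucl = mean_x + 2.66 * mr_bar  # upper control limit
lcl = mean_x - 2.66 * mr_bar  # lower control limit

out_of_control = [r for r in rates if r > ucl or r < lcl]
print(f"center = {mean_x:.1f}, limits = [{lcl:.1f}, {ucl:.1f}], signals = {out_of_control}")
```

The debate is precisely whether limits computed this way are meaningful when each "event" involves different people, components, and conditions, so the chart mixes multiple cause systems.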
• Individual and team capability. Individual and team performance is affected by multiple sources of variation, some differing among successive tasks and others differing across projects. We need ways to represent how these sources affect capability and how to develop expectations for evaluating process performance as these sources of variation change. For instance, how should our expectations of process performance change on the basis of a developer's experience or skill? How should we characterize a team's capability on the basis of its mix of experience and skill levels? How do we adjust and apply QPM methods to get the best control and prediction in a mixed-capability environment?
• Aggregating measures for system prediction. The example in figure 1 appears simple until we realize that we're trying to predict a system characteristic using bundles of measures taken at the component level. For instance, we're using defect data from inspections of components to predict system quality. When and how should we aggregate these predictors from the component level to predict a system attribute?
• Validating benefits. QPM methods need stronger validation than the results offered in case studies, given that negative results are rarely published. We must test these techniques' marginal value by comparing their additional contributions to prediction and control against the value of statistical methods computed only at major milestones during development. Validating benefits across multiple industries would help build confidence among managers who have little previous experience with such methods.
Bill Curtis is senior vice president and chief scientist of CAST Software, where he focuses on predicting long-term system quality and costs from statistical characterizations of system attributes and architecture. He's also a coauthor of the CMM, People CMM, and Business Process Maturity Model. He holds a PhD from Texas Christian University and is an IEEE fellow. Contact him at firstname.lastname@example.org.
Girish V. Seshagiri is CEO of Advanced Information Services. He's a cofounder of the Watts Humphrey Software Quality Institute in Chennai, India, and of three software process improvement networks. His interests are in business-goals-driven software process improvement and defect-free software delivery on time. He received his MSc in physics from the University of Madras and his MBA in marketing from Michigan State University. Contact him at email@example.com.
Donald Reifer is president of RCI, a software firm specializing in metrics and management. His professional interests focus on developing business cases for justifying changes in software organizations. He's a senior member of the ACM and IEEE. He has an MS in operations research from the University of Southern California. Contact him at firstname.lastname@example.org.
Iraj Hirmanpour is principal of AMS, a software process improvement firm. His research interest is in model-based process improvement based on CMM, CMMI, TSP, and PSP. He holds a PhD in computer science from Florida Atlantic University. Contact him at email@example.com.
Gargi Keeni is a vice president of Tata Consultancy Services. Her research interests include process improvement, quality management systems, and business excellence. She's a senior member of the IEEE, a member of the National Association of Software and Services Companies, and a member of the IEEE Software Advisory Board. She received her PhD in physics from Tohoku University, Japan. Contact her at firstname.lastname@example.org.