<p><b>Abstract</b>—Process studies and improvement efforts typically call for new instrumentation of the process to collect the data deemed necessary. This can be intrusive and expensive, and resistance to the extra workload often foils the study before it begins. The result is neither interesting new knowledge nor an improved process. In many organizations, however, extensive historical process and product data already exist. Can these existing data be used to empirically explore which process factors might be affecting the outcome of the process? If they can, organizations would have a cost-effective method for quantitatively, if not causally, understanding their process and its relationship to the product. We present a case study that analyzes an in-place industrial process and takes advantage of existing data sources. In doing so, we also illustrate and propose a methodology for such exploratory empirical studies. The case study makes use of several readily available repositories of process data in the industrial organization. Our results show that readily available data can be used to correlate both simple aggregate metrics and complex process metrics with defects in the product. Through the case study, we give evidence supporting the claim that exploratory empirical studies can provide significant results and benefits while being cost effective in their demands on the organization.</p>
<p><b>Index Terms</b>—Software process, process improvement, retrospective case study, empirical case study, process measurement, process model validation.</p>

L. G. Votta, J. E. Cook, and A. L. Wolf, "Cost-Effective Analysis of In-Place Software Processes," IEEE Transactions on Software Engineering, vol. 24, pp. 650-663, 1998.