Issue No.04 - July-Aug. (2013 vol.30)
pp: 81-87
Yang-Ming Zhu , Philips Healthcare
David Faller , Philips Healthcare
ABSTRACT
Defect density is the ratio of the number of defects to software size. Properly assessing defect density in evolutionary product development requires strong tool and process support so that defects can be traced to the offending source code, and it requires waiting for field defects after the product is deployed. To ease the calculation in practice, the proposed method approximates the lifetime number of defects against the software by the number of defects reported during a development period, even if those defects are reported against previous product releases. The method uses aggregated code churn to measure software size. It was applied to two medical-imaging development projects spanning three geographical sites, with about 30 software engineers and 1.354 million lines of code in the released products. The results suggest the approach has merit and validity, which the authors discuss in the distributed-development context. The method is simple to apply and can be used by others in similar situations.
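The core calculation the abstract describes — defects counted in a development period divided by aggregated code churn — can be sketched as follows. This is a minimal illustration of the arithmetic under stated assumptions, not the authors' implementation; the function name, the per-release churn figures, and the defect count are all hypothetical.

```python
# Sketch of the defect-density approximation described in the abstract.
# Assumptions (not from the paper's text): churn is measured per release as
# lines added + changed + deleted, and the defect count covers all defects
# reported in the development period, including those filed against earlier
# releases. All numbers are made up for illustration.

def defect_density(defects_reported, churn_per_release):
    """Defects per thousand lines (KLOC) of aggregated code churn."""
    total_churn = sum(churn_per_release)  # aggregate churn across releases
    return defects_reported / (total_churn / 1000.0)

# Hypothetical example: 120 defects over two releases' worth of churn.
churn = [45_000, 30_000]  # lines churned in each release
print(round(defect_density(120, churn), 2))  # 1.6 defects per churned KLOC
```

Using aggregated churn rather than total lines of code as the denominator keeps the metric focused on the code actually touched during the period, which matches the evolutionary-development setting the article studies.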
INDEX TERMS
Software d, Software metrics, Approximation methods, Software quality, Analytical models, Software performance, Performance evaluation, distributed development, defect density, evolutionary development, code churn
CITATION
Yang-Ming Zhu, David Faller, "Defect-Density Assessment in Evolutionary Product Development: A Case Study in Medical Imaging", IEEE Software, vol.30, no. 4, pp. 81-87, July-Aug. 2013, doi:10.1109/MS.2012.111