IEEE Software, vol. 30, no. 5, Sept.-Oct. 2013
pp. 54-61
A. T. Misirli , Univ. of Oulu, Oulu, Finland
B. Caglayan , Bogazici Univ., Istanbul, Turkey
A. Bener , Ryerson Univ., Toronto, Canada
B. Turhan , Univ. of Oulu, Oulu, Finland
Software analytics guides practitioners in decision making throughout the software development process. In this context, prediction models help managers organize their resources efficiently and identify problems by analyzing patterns in existing project data in an intelligent and meaningful manner. Over the past decade, the authors have worked with software organizations to build metric repositories and predictive models that address process-, product-, and people-related issues in practice. This article shares their experience over the years, reflecting the expectations and outcomes from both practitioner and researcher viewpoints.
Software analytics, decision making, predictive models, estimation, software development, effort estimation, defect prediction, interviews
A. T. Misirli, B. Caglayan, A. Bener, B. Turhan, "A Retrospective Study of Software Analytics Projects: In-Depth Interviews with Practitioners," IEEE Software, vol. 30, no. 5, pp. 54-61, Sept.-Oct. 2013, doi:10.1109/MS.2013.93