Issue No. 06 - Nov.-Dec. (2012 vol. 38)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TSE.2011.111
Tim Menzies , West Virginia University, Morgantown
Ekrem Kocaguneli , West Virginia University, Morgantown
Jacky W. Keung , The Hong Kong Polytechnic University, Hong Kong
Background: Despite decades of research, there is no consensus on which software effort estimation methods produce the most accurate models. Aim: Prior work has reported that, given M estimation methods, no single method consistently outperforms all others. Perhaps, rather than recommending one estimation method as best, it is wiser to generate estimates from ensembles of multiple estimation methods. Method: Nine learners were combined with 10 preprocessing options to generate 9 × 10 = 90 solo methods. These were applied to 20 datasets and evaluated using seven error measures. This identified the best n (in our case n = 13) solo methods that showed stable performance across multiple datasets and error measures. The top 2, 4, 8, and 13 solo methods were then combined to generate 12 multimethods, which were then compared to the solo methods. Results: 1) The top 10 (out of 12) multimethods significantly outperformed all 90 solo methods. 2) The error rates of the multimethods were significantly lower than those of the solo methods. 3) The ranking of the best multimethod was remarkably stable. Conclusion: While there is no best single effort estimation method, there exist best combinations of such effort estimation methods.
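The abstract describes building "multimethods" by combining the estimates of several solo methods. The sketch below illustrates the general idea under stated assumptions: it combines solo estimates by taking their mean (the abstract does not specify the combination rule, and the toy learners here are illustrative stand-ins, not the 90 preprocessor/learner combinations evaluated in the paper).

```python
# Hedged sketch of ensemble ("multimethod") effort estimation.
# Assumptions: the ensemble averages the solo estimates, and each solo
# method maps (training projects, query features) -> effort estimate.

def knn_estimate(train, query, k=1):
    """Analogy-based (k-NN) estimate: mean effort of the k nearest
    projects by Euclidean distance over the feature vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbors = sorted(train, key=lambda p: dist(p[0], query))[:k]
    return sum(effort for _, effort in neighbors) / k

def mean_estimate(train, query):
    """Baseline: mean effort over all training projects."""
    return sum(effort for _, effort in train) / len(train)

def linear_estimate(train, query):
    """One-feature least-squares fit on the first feature (e.g. size)."""
    xs = [features[0] for features, _ in train]
    ys = [effort for _, effort in train]
    n = len(train)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return ybar + slope * (query[0] - xbar)

def ensemble_estimate(train, query, solos):
    """Multimethod: combine the solo estimates (here, via their mean)."""
    estimates = [solo(train, query) for solo in solos]
    return sum(estimates) / len(estimates)

# Toy data: ((size_kloc, team_size), effort_person_months) -- invented values.
projects = [((10, 3), 24.0), ((20, 5), 55.0),
            ((15, 4), 38.0), ((30, 6), 90.0)]
solos = [knn_estimate, mean_estimate, linear_estimate]
print(ensemble_estimate(projects, (18, 4), solos))
```

One property worth noting: the mean-combined estimate always lies between the smallest and largest solo estimate, which is one intuition for why ensembles tend to avoid the worst-case errors of any single method.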
Costs, Software performance, Measurement uncertainty, Taxonomy, Machine learning, Regression tree analysis, Support vector machines, Neural networks, k-NN, Software cost estimation, ensemble, analogy
Tim Menzies, Ekrem Kocaguneli, Jacky W. Keung, "On the Value of Ensemble Effort Estimation", IEEE Transactions on Software Engineering, vol. 38, no. 6, pp. 1403-1416, Nov.-Dec. 2012, doi:10.1109/TSE.2011.111