Issue No. 11 - November 2003 (vol. 29)
pp. 985-995
Tron Foss, Erik Stensrud, Barbara Kitchenham, Ingunn Myrtveit
ABSTRACT
Abstract—The mean magnitude of relative error, MMRE, is probably the most widely used evaluation criterion for assessing the performance of competing software prediction models. One purpose of MMRE is to help us select the best model. In this paper, we perform a simulation study demonstrating that MMRE does not always select the best model. Our findings cast some doubt on the conclusions of any study of competing software prediction models that used MMRE as the basis of model comparison. We therefore recommend against using MMRE to evaluate and compare prediction models. At present, we have no universal replacement for MMRE; in the meantime, we recommend combining a theoretical justification of the proposed models with the other metrics proposed in this paper.
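For readers unfamiliar with the criterion: MMRE is conventionally computed as the mean of the magnitude of relative error, MRE_i = |actual_i - predicted_i| / actual_i, taken over all observations. The Python sketch below illustrates this conventional definition only; the project data and model estimates in it are hypothetical and are not taken from the paper.

# Minimal illustrative sketch of the conventional MMRE computation.
# MRE_i = |actual_i - predicted_i| / actual_i; MMRE is the mean over all i.

def mmre(actuals, predictions):
    """Mean magnitude of relative error for paired actual/predicted efforts."""
    relative_errors = [abs(a - p) / a for a, p in zip(actuals, predictions)]
    return sum(relative_errors) / len(relative_errors)

# Hypothetical example: two competing models evaluated on the same projects.
actual_effort = [100.0, 250.0, 400.0]
model_a = [120.0, 230.0, 390.0]   # hypothetical estimates from model A
model_b = [90.0, 300.0, 380.0]    # hypothetical estimates from model B
print(mmre(actual_effort, model_a))  # the lower MMRE is conventionally preferred,
print(mmre(actual_effort, model_b))  # which is exactly the practice the paper questions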
INDEX TERMS
Mean magnitude of relative error, software metrics, simulation, regression analysis, prediction models, software cost estimation, software engineering, empirical software engineering, prediction accuracy.
CITATION
Tron Foss, Erik Stensrud, Barbara Kitchenham, Ingunn Myrtveit, "A Simulation Study of the Model Evaluation Criterion MMRE", IEEE Transactions on Software Engineering, vol. 29, no. 11, pp. 985-995, November 2003, doi:10.1109/TSE.2003.1245300