Issue No. 6 - November/December 2006 (vol. 21)
pp. 68-70
Arie Ben-David , Holon Institute of Technology
ABSTRACT
Hit ratio is currently the most common metric for measuring the accuracy of classifiers. However, it doesn't compensate for classifications that might have been due to chance. The problem's magnitude is studied here through an empirical experiment on three multivalued UCI (University of California, Irvine) classification data sets, using two well-known machine learning models: C4.5 and naive Bayes. The author shows that using hit ratio can lead to erroneous conclusions. He proposes Cohen's kappa as a statistically robust alternative that takes random hits into account. Like any other metric, Cohen's kappa has its own shortcomings, but the author argues that unless a better simple alternative is found, it should be mandatory in any scientific report about classifier accuracy.
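The following sketch (not part of the article) illustrates the distinction the abstract draws: hit ratio versus Cohen's kappa computed from the same confusion matrix. The 3-class confusion matrix is hypothetical, chosen only to show how kappa discounts agreement expected by chance.

import numpy as np

def hit_ratio(cm):
    """Fraction of correct classifications (overall accuracy)."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                  # observed agreement (hit ratio)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical confusion matrix (rows = true class, columns = predicted class)
cm = np.array([[50,  5,  5],
               [10, 20, 10],
               [10, 10, 20]])

print(f"hit ratio: {hit_ratio(cm):.3f}")     # ~0.643
print(f"kappa:     {cohens_kappa(cm):.3f}")  # ~0.444, lower once chance hits are removed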
INDEX TERMS
Cohen's kappa, hit ratio, classification accuracy.
CITATION
Arie Ben-David, "What's Wrong with Hit Ratio?", IEEE Intelligent Systems, vol.21, no. 6, pp. 68-70, November/December 2006, doi:10.1109/MIS.2006.123