What's Wrong with Hit Ratio?
November/December 2006 (vol. 21, no. 6)
pp. 68-70
Arie Ben-David, Holon Institute of Technology
Hit ratio is currently the most common metric for measuring the accuracy of classifiers. However, it doesn't compensate for classifications that might be due to chance. The magnitude of the problem is studied here through an empirical experiment on three multivalued UCI (University of California, Irvine) classification data sets, using two well-known machine learning models: C4.5 and naive Bayes. The author shows that using hit ratio can lead to erroneous conclusions. He proposes Cohen's kappa as a statistically robust alternative that takes random hits into account. Like any other metric, Cohen's kappa has its own shortcomings, but the author argues that unless a better simple alternative is found, its use should be mandatory in any scientific report about classifier accuracy.
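The chance correction the abstract refers to can be illustrated with a short sketch (not from the article itself): Cohen's kappa is defined as (p_o - p_e) / (1 - p_e), where p_o is the observed agreement (the hit ratio) and p_e is the agreement expected by chance from the row and column marginals of the confusion matrix.

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix, where
    confusion[i][j] = count of items with true class i predicted as class j."""
    n = sum(sum(row) for row in confusion)
    k = len(confusion)
    # Observed agreement: the plain hit ratio.
    p_o = sum(confusion[i][i] for i in range(k)) / n
    # Chance agreement expected from the marginal distributions.
    row_tot = [sum(confusion[i]) for i in range(k)]
    col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# A degenerate classifier that labels everything as the majority class
# scores a 90% hit ratio here, yet kappa = 0: all of its "hits" are
# exactly what chance agreement with the class prior would produce.
cm = [[90, 0],
      [10, 0]]
print(cohens_kappa(cm))  # → 0.0
```

This is the kind of case the article warns about: hit ratio alone (0.9) suggests a strong classifier, while kappa (0.0) reveals it has no skill beyond the base rate.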
Index Terms:
Cohen's kappa, hit ratio, classification accuracy.
Arie Ben-David, "What's Wrong with Hit Ratio?," IEEE Intelligent Systems, vol. 21, no. 6, pp. 68-70, Nov.-Dec. 2006, doi:10.1109/MIS.2006.123