Issue No. 3, March 1991 (vol. 13), pp. 285-289
ABSTRACT
Small-sample error-rate estimators for nearest-neighbor classifiers are examined and contrasted with the same estimators for three-nearest-neighbor classifiers. The performance of the bootstrap estimators, e0 and 0.632B, is considered relative to leaving-one-out and other cross-validation estimators. Monte Carlo simulations are used to measure the performance of the error-rate estimators. The experimental results are compared to previously reported simulations for nearest-neighbor classifiers and alternative classifiers. It is shown that each of the estimators has strengths and weaknesses for varying apparent and true error-rate situations. A combined estimator that corrects the leaving-one-out estimator (by combining bootstrap and cross-validation estimators) gives strong results over a broad range of situations.
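The estimators named in the abstract (the apparent or resubstitution error, leaving-one-out cross-validation, the bootstrap e0 estimator, and the 0.632B combination) can be illustrated with a short sketch for a 1-NN classifier. This is not the authors' code; the numpy-only 1-NN implementation, the synthetic two-class data set, and the number of bootstrap replicates B are illustrative assumptions.

```python
# Sketch of small-sample error-rate estimators for a 1-NN classifier:
# apparent error, leave-one-out, bootstrap e0, and the 0.632B combination.
import numpy as np

def one_nn_predict(train_X, train_y, test_X):
    """Classify each test point by the label of its nearest training point."""
    # Pairwise squared Euclidean distances, shape (n_test, n_train).
    d = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=2)
    return train_y[d.argmin(axis=1)]

def leave_one_out_error(X, y):
    """Leave-one-out estimate: classify each sample using the other n-1 samples."""
    n = len(y)
    errors = 0
    for i in range(n):
        mask = np.arange(n) != i
        pred = one_nn_predict(X[mask], y[mask], X[i:i + 1])
        errors += int(pred[0] != y[i])
    return errors / n

def bootstrap_e0_error(X, y, B=100, seed=None):
    """e0 estimate: average error on samples left out of each bootstrap replicate."""
    rng = np.random.default_rng(seed)
    n = len(y)
    rates = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)       # bootstrap sample drawn with replacement
        out = np.setdiff1d(np.arange(n), idx)  # samples not in the bootstrap replicate
        if len(out) == 0:
            continue
        pred = one_nn_predict(X[idx], y[idx], X[out])
        rates.append(np.mean(pred != y[out]))
    return float(np.mean(rates))

def error_rate_estimates(X, y, B=100, seed=None):
    """Return apparent, leave-one-out, e0, and 0.632B error-rate estimates."""
    apparent = np.mean(one_nn_predict(X, y, X) != y)  # zero for 1-NN by construction
    loo = leave_one_out_error(X, y)
    e0 = bootstrap_e0_error(X, y, B=B, seed=seed)
    b632 = 0.368 * apparent + 0.632 * e0              # the 0.632B combination
    return {"apparent": apparent, "leave-one-out": loo, "e0": e0, "0.632B": b632}

if __name__ == "__main__":
    # Small two-class Gaussian sample, mirroring the small-sample setting studied.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (15, 2)), rng.normal(1.5, 1.0, (15, 2))])
    y = np.repeat([0, 1], 15)
    print(error_rate_estimates(X, y, B=200, seed=1))
```

The apparent error of a 1-NN classifier is always zero (each training point is its own nearest neighbor), which is why 0.632B effectively weights the e0 term heavily in this setting and why corrections to leaving-one-out, as studied in the paper, matter for small samples.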
INDEX TERMS
pattern recognition; k-NN classifiers; nearest-neighbor classifiers; bootstrap estimators; leaving-one-out; cross-validation estimators; Monte Carlo simulations; error-rate estimators; estimation theory; Monte Carlo methods; statistics
CITATION
S. M. Weiss, "Small Sample Error Rate Estimation for k-NN Classifiers", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 13, no. 3, pp. 285-289, March 1991, doi: 10.1109/34.75516