Issue No. 03 - March (1991 vol. 13)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/34.75516
<p>Small sample error rate estimators for nearest-neighbor classifiers are examined and contrasted with the same estimators for three-nearest-neighbor classifiers. The performance of the bootstrap estimators, e0 and 0.632B, is considered relative to leaving-one-out and other cross-validation estimators. Monte Carlo simulations are used to measure the performance of the error-rate estimators. The experimental results are compared to previously reported simulations for nearest-neighbor classifiers and alternative classifiers. It is shown that each of the estimators has strengths and weaknesses for varying apparent and true error-rate situations. A combined estimator that corrects the leaving-one-out estimator (by combining bootstrap and cross-validation estimators) gives strong results over a broad range of situations.</p>
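The abstract contrasts several small-sample error-rate estimators. As a rough illustration only (not the paper's implementation), the sketch below computes the apparent (resubstitution), leave-one-out, e0 bootstrap, and 0.632B estimates for a 1-NN classifier on one-dimensional data; the function names, the toy absolute-distance metric, and the choice of 50 bootstrap replications are all assumptions for the example.

```python
import random
from collections import Counter

def nn_predict(train, x, k=1):
    # Classify x by majority vote among its k nearest training points
    # (1-D features, absolute-difference distance).
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

def apparent_error(sample, k=1):
    # Resubstitution error: classify the training sample with itself.
    # For 1-NN on distinct points this is 0, which is why the apparent
    # error alone is a poor (optimistically biased) estimate.
    return sum(nn_predict(sample, x, k) != y for x, y in sample) / len(sample)

def leave_one_out(sample, k=1):
    # Cross-validation error: each point is classified by the other n-1.
    errors = sum(
        nn_predict(sample[:i] + sample[i + 1:], x, k) != y
        for i, (x, y) in enumerate(sample)
    )
    return errors / len(sample)

def bootstrap_e0(sample, k=1, reps=50, rng=None):
    # e0 estimator: train on each bootstrap resample, score only the
    # points left out of that resample, and average across replications.
    rng = rng or random.Random(0)
    rates = []
    for _ in range(reps):
        boot = [rng.choice(sample) for _ in sample]
        held_out = [p for p in sample if p not in boot]
        if not held_out:
            continue
        errs = sum(nn_predict(boot, x, k) != y for x, y in held_out)
        rates.append(errs / len(held_out))
    return sum(rates) / len(rates)

def estimator_632b(sample, k=1, reps=50):
    # 0.632B estimator: weighted blend of the optimistic apparent error
    # and the pessimistic e0 bootstrap error.
    return 0.368 * apparent_error(sample, k) + 0.632 * bootstrap_e0(sample, k, reps)
```

Setting k=3 in the same calls gives the three-nearest-neighbor variants compared in the paper.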
pattern recognition; k-NN classifiers; nearest-neighbor classifiers; bootstrap estimators; leaving-one-out; cross-validation estimators; Monte Carlo simulations; error-rate estimators; estimation theory; Monte Carlo methods; statistics
S. Weiss, "Small Sample Error Rate Estimation for k-NN Classifiers," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 13, no. 3, pp. 285-289, 1991.