Issue No. 5, May 1987 (vol. 9), pp. 628-633
Richard C. Dubes , Department of Computer Science, Michigan State University, East Lansing, MI 48824.
Chaur-Chin Chen , Department of Computer Science, Michigan State University, East Lansing, MI 48824.
ABSTRACT
The design of a pattern recognition system requires careful attention to error estimation. The error rate is the most important descriptor of a classifier's performance. The commonly used estimates of error rate are based on the holdout method, the resubstitution method, and the leave-one-out method. All suffer from either large bias or large variance, and their sampling distributions are not known. Bootstrapping refers to a class of procedures that resample the given data by computer. It permits determining the statistical properties of an estimator when very little is known about the underlying distribution and no additional samples are available. Since its introduction in the past decade, the bootstrap technique has been successfully applied to many statistical estimation and inference problems. However, it has not been exploited in the design of pattern recognition systems. We report results on the application of several bootstrap techniques in estimating the error rate of 1-NN and quadratic classifiers. Our experiments show that, in most cases, the confidence interval of a bootstrap estimator of classification error is smaller than that of the leave-one-out estimator. The errors of 1-NN, quadratic, and Fisher classifiers are estimated for several real data sets.
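As an informal illustration of the resampling idea described in the abstract (not the specific estimators studied in the paper), the sketch below estimates the error rate of a 1-NN classifier from bootstrap replicates: each replicate draws the data with replacement, trains on the draw, and tests on the points left out, so the spread of the replicate errors approximates the sampling distribution of the estimate. The function name `bootstrap_error` and the use of scikit-learn are illustrative assumptions only.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def bootstrap_error(X, y, n_boot=200, seed=0):
    """Plain bootstrap estimate of 1-NN classification error (illustrative sketch).

    For each replicate: resample (X, y) with replacement, train 1-NN on the
    bootstrap sample, and evaluate on the points not drawn ("out-of-bag").
    Returns the mean replicate error and an empirical 95% interval.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)           # sample indices with replacement
        oob = np.setdiff1d(np.arange(n), idx)      # points left out of this replicate
        if oob.size == 0:
            continue
        clf = KNeighborsClassifier(n_neighbors=1).fit(X[idx], y[idx])
        errors.append(np.mean(clf.predict(X[oob]) != y[oob]))
    errors = np.asarray(errors)
    return errors.mean(), np.percentile(errors, [2.5, 97.5])

# Example usage on synthetic two-class Gaussian data:
# X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 2.0])
# y = np.array([0] * 50 + [1] * 50)
# mean_err, ci = bootstrap_error(X, y)
```

The interval returned here is simply the empirical percentile interval of the replicate errors; the paper compares such bootstrap-based intervals against the leave-one-out estimator.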
CITATION
Richard C. Dubes, Chaur-Chin Chen, "Bootstrap Techniques for Error Estimation", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 9, no. 5, pp. 628-633, May 1987, doi:10.1109/TPAMI.1987.4767957