Issue No. 12 - December 2003 (vol. 25)
pp. 1561-1569
ABSTRACT
The conventional wisdom in the field of statistical pattern recognition (SPR) is that the size of the finite test sample dominates the variance in the assessment of the performance of a classical or neural classifier. The present work shows that this result has only narrow applicability. In particular, when competing algorithms are compared, the finite training sample more commonly dominates this uncertainty. This general problem in SPR is analyzed using a formal structure recently developed for multivariate random-effects receiver operating characteristic (ROC) analysis. Monte Carlo trials within the general model are used to explore the detailed statistical structure of several representative problems in the subfield of computer-aided diagnosis in medicine. The scaling laws relating the variance of accuracy measures to the numbers of training and test samples are investigated and found to be comparable to those discussed in the classic text of Fukunaga; however, important interaction terms have been neglected by previous authors. Finally, the importance of the contribution of finite trainers to the uncertainties argues for some form of bootstrap analysis to sample that uncertainty. The leading contemporary candidate is an extension of the 0.632 bootstrap and associated error analysis, as opposed to the more commonly used cross-validation.
INDEX TERMS
Pattern recognition, classifier design and evaluation, discriminant analysis, ROC analysis, components-of-variance models, bootstrap methods.
CITATION
Sergey V. Beiden, Marcus A. Maloof, Robert F. Wagner, "A General Model for Finite-Sample Effects in Training and Testing of Competing Classifiers", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1561-1569, December 2003, doi:10.1109/TPAMI.2003.1251149