Issue No. 04 - April 1989 (vol. 11)
pp. 421-423
The use of nonparametric error estimates may lead to biased results if the kernel covariances are estimated from the same data as those used to form the error estimate. If additional design samples are available, one may eliminate this bias by estimating the class covariances from an independent set of data. If additional samples are not available, one may instead resort to leave-one-out type estimates of the kernel (for Parzen estimates) or metric (for nearest-neighbor estimates) for every sample being tested. The authors present an efficient algorithm for computing these leave-one-out type estimates that requires little additional computational burden over procedures currently in use. The presentation is applicable to both Parzen and k-nearest neighbor (k-NN) type estimates. Experimental results demonstrating the efficiency of the algorithm are provided.
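The bias arises because a sample influences the covariance used to classify itself; leave-one-out estimation removes that influence per test sample. As a rough illustration of the idea (not the authors' efficient algorithm from the paper), the leave-one-out covariance for each sample can be obtained by downdating precomputed sums rather than recomputing from scratch; the function name and structure below are hypothetical:

```python
import numpy as np

def loo_covariances(X):
    """For each sample j, return the maximum-likelihood covariance of the
    remaining n-1 samples. Illustrative sketch via downdating the total
    sum and scatter matrix; NOT the algorithm presented in the paper."""
    n, d = X.shape
    total = X.sum(axis=0)        # sum of all samples
    scatter = X.T @ X            # sum of outer products x_i x_i^T
    covs = []
    for j in range(n):
        m = (total - X[j]) / (n - 1)              # leave-one-out mean
        S = scatter - np.outer(X[j], X[j])        # leave-one-out scatter
        covs.append(S / (n - 1) - np.outer(m, m))  # MLE covariance
    return covs
```

Each downdate costs O(d^2) instead of the O(n d^2) of a full recomputation, which reflects the kind of savings a leave-one-out procedure must provide to be practical when it is repeated for every test sample.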
Index terms: estimation theory, Bayes methods, error analysis, nearest neighbor, leave-one-out procedures, nonparametric error estimates, covariance matrix, Parzen estimates, kernel, testing, nearest-neighbor searches, upper bound, pattern recognition, smoothing methods
"Leave-one-out procedures for nonparametric error estimates", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 11, no. 4, pp. 421-423, April 1989, doi:10.1109/34.19039
