Nearest neighbor classification is one of the simplest and most popular methods for statistical pattern recognition. A major issue in k-nearest neighbor classification is how to find an optimal value of the neighborhood parameter k. In practice, this value is generally estimated by the method of cross-validation. However, the ideal value of k in a classification problem not only depends on the entire data set, but also on the specific observation to be classified. Instead of using any single value of k, this paper studies results for a finite sequence of classifiers indexed by k. Along with the usual posterior probability estimates, a new measure, called the Bayesian measure of strength, is proposed and investigated in this paper as a measure of evidence for different classes. The results of these classifiers and their corresponding estimated misclassification probabilities are visually displayed using shaded strips. These plots provide an effective visualization of the evidence in favor of different classes when a given data point is to be classified. We also propose a simple weighted averaging technique that aggregates the results of different nearest neighbor classifiers to arrive at the final decision. Based on the analysis of several benchmark data sets, the proposed method is found to be better than using a single value of k.
Index Terms: Bayesian strength function, misclassification rates, multiscale visualization, neighborhood parameter, posterior probability, prior distribution, weighted averaging.
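The aggregation idea in the abstract (run k-NN for a range of k, then combine the resulting posterior estimates by weighted averaging) can be sketched as follows. This is an illustrative sketch, not the authors' exact method: the choice of leave-one-out accuracy as the weight for each k, and the helper names `knn_vote`, `loo_accuracy`, and `aggregated_knn`, are assumptions made for this example.

```python
# Sketch of aggregating nearest neighbor classifiers over several k values.
# Weighting each k by its leave-one-out accuracy is an illustrative
# assumption, not the weighting scheme from the paper.
from collections import Counter
import math

def knn_vote(train, labels, x, k):
    """Estimated posterior: fraction of the k nearest points in each class."""
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    top = [labels[i] for i in order[:k]]
    counts = Counter(top)
    return {c: counts.get(c, 0) / k for c in set(labels)}

def loo_accuracy(train, labels, k):
    """Leave-one-out accuracy of the k-NN rule, used here as its weight."""
    correct = 0
    for i in range(len(train)):
        rest = train[:i] + train[i + 1:]
        rest_lab = labels[:i] + labels[i + 1:]
        probs = knn_vote(rest, rest_lab, train[i], k)
        if max(probs, key=probs.get) == labels[i]:
            correct += 1
    return correct / len(train)

def aggregated_knn(train, labels, x, ks):
    """Weighted average of posterior estimates across the k values in ks."""
    weights = {k: loo_accuracy(train, labels, k) for k in ks}
    total = sum(weights.values())
    agg = Counter()
    for k in ks:
        for c, p in knn_vote(train, labels, x, k).items():
            agg[c] += weights[k] / total * p
    return max(agg, key=agg.get)

train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ['a', 'a', 'a', 'b', 'b', 'b']
print(aggregated_knn(train, labels, (0.5, 0.5), [1, 3, 5]))  # 'a'
```

A poorly performing k (here k = 5, which spans both clusters in the toy data) receives low weight, so the final decision is driven by the values of k that classify the training data well.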

P. Chaudhuri, C. Murthy and A. K. Ghosh, "On Visualization and Aggregation of Nearest Neighbor Classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 1592-1602, 2005.