Issue No. 10, October 2005 (Vol. 27)
pp. 1592-1602
ABSTRACT
Nearest neighbor classification is one of the simplest and most popular methods for statistical pattern recognition. A major issue in k-nearest neighbor classification is how to find an optimal value of the neighborhood parameter k. In practice, this value is generally estimated by the method of cross-validation. However, the ideal value of k in a classification problem not only depends on the entire data set, but also on the specific observation to be classified. Instead of using any single value of k, this paper studies results for a finite sequence of classifiers indexed by k. Along with the usual posterior probability estimates, a new measure, called the Bayesian measure of strength, is proposed and investigated in this paper as a measure of evidence for different classes. The results of these classifiers and their corresponding estimated misclassification probabilities are visually displayed using shaded strips. These plots provide an effective visualization of the evidence in favor of different classes when a given data point is to be classified. We also propose a simple weighted averaging technique that aggregates the results of different nearest neighbor classifiers to arrive at the final decision. Based on the analysis of several benchmark data sets, the proposed method is found to be better than using a single value of k.
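The abstract describes aggregating a finite sequence of k-nearest neighbor classifiers by weighted averaging of their posterior probability estimates. The sketch below illustrates that general idea only; it is not the authors' method. In particular, the function names (`knn_posteriors`, `aggregated_knn_predict`) are hypothetical, uniform weights are used as a placeholder where the paper derives weights from estimated misclassification probabilities, and the Bayesian strength function and the shaded-strip visualization are not reproduced here.

```python
import numpy as np
from collections import Counter

def knn_posteriors(X_train, y_train, x, k, classes):
    """Estimate class posterior probabilities for a point x from its k nearest neighbors."""
    dists = np.linalg.norm(X_train - x, axis=1)          # Euclidean distances to all training points
    nearest_labels = y_train[np.argsort(dists)[:k]]      # labels of the k closest points
    counts = Counter(nearest_labels)
    return np.array([counts.get(c, 0) / k for c in classes])

def aggregated_knn_predict(X_train, y_train, x, k_values, weights=None):
    """Aggregate k-NN classifiers over a sequence of k values by weighted averaging
    of their posterior estimates (uniform weights here; a simplifying assumption)."""
    classes = np.unique(y_train)
    if weights is None:
        weights = np.ones(len(k_values))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    post = np.zeros(len(classes))
    for k, w in zip(k_values, weights):
        post += w * knn_posteriors(X_train, y_train, x, k, classes)
    return classes[np.argmax(post)], post

# Usage sketch on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
label, post = aggregated_knn_predict(X, y, np.array([0.3, 0.2]), k_values=range(1, 16))
```

In this toy setup, each value of k from 1 to 15 contributes one posterior estimate, and the final label is the class with the largest averaged posterior; replacing the uniform weights with weights based on each classifier's estimated misclassification probability would move the sketch closer to the aggregation scheme the abstract describes.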
INDEX TERMS
Bayesian strength function, misclassification rates, multiscale visualization, neighborhood parameter, posterior probability, prior distribution, weighted averaging.
CITATION
Anil K. Ghosh, Probal Chaudhuri, C. A. Murthy, "On Visualization and Aggregation of Nearest Neighbor Classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1592-1602, October 2005, doi:10.1109/TPAMI.2005.204
