Issue No. 07 - July (2006 vol. 28)
Enrique Vidal, IEEE Computer Society
To optimize the accuracy of the Nearest-Neighbor classification rule, a weighted distance is proposed, along with algorithms to automatically learn the corresponding weights. These weights may be specific to each class and feature, to each individual prototype, or to both. The learning algorithms are derived by (approximately) minimizing the Leaving-One-Out classification error on the given training set. The proposed approach is assessed through a series of experiments with UCI/STATLOG corpora, as well as with a more specific text classification task that entails very sparse data representations and huge dimensionality. In all these experiments, the proposed approach shows uniformly good behavior, with results comparable to or better than the state-of-the-art results published with the same data so far.
Weighted distances, nearest neighbor, leaving-one-out, error minimization, gradient descent.
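The approach described in the abstract can be sketched in a few lines: a per-class, per-feature weighted distance for 1-NN classification, with the weights learned by gradient descent on a smooth surrogate of the leave-one-out error. The sketch below is illustrative only, assuming a sigmoid surrogate of the ratio between the nearest same-class and nearest different-class distances and a finite-difference gradient; the paper derives the weights and analytic gradients differently, and the toy data, function names, and hyperparameters here are not from the paper.

```python
import numpy as np

def weighted_dists(X, W, y):
    # D[i, j] = distance from sample i to prototype j, weighted per feature
    # by the weights of prototype j's class: sqrt(sum_k (W[y[j],k]*(x_ik-x_jk))^2)
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d) pairwise differences
    w = W[y]                                      # (n, d) class weights per prototype
    return np.sqrt(((w[None, :, :] * diff) ** 2).sum(-1))

def loo_soft_error(W, X, y, beta=10.0):
    # Soft leave-one-out error: compare each sample's distance to its nearest
    # same-class neighbor vs. its nearest different-class neighbor; a sigmoid
    # of the ratio approximates the 0/1 LOO error of the weighted 1-NN rule.
    D = weighted_dists(X, W, y)
    np.fill_diagonal(D, np.inf)                   # leave each sample out
    same = y[:, None] == y[None, :]
    d_same = np.where(same, D, np.inf).min(1)
    d_diff = np.where(~same, D, np.inf).min(1)
    r = d_same / d_diff                           # r > 1 means a 1-NN error
    return 1.0 / (1.0 + np.exp(-beta * (r - 1.0)))

def loo_error(W, X, y):
    # Hard leave-one-out 1-NN error with the weighted distance.
    D = weighted_dists(X, W, y)
    np.fill_diagonal(D, np.inf)
    return float(np.mean(y[D.argmin(1)] != y))

def learn_weights(X, y, steps=150, lr=0.1, eps=1e-4):
    # Gradient descent on the mean soft LOO error; the gradient is estimated
    # by finite differences here (the paper uses analytic gradients).
    W = np.ones((len(np.unique(y)), X.shape[1]))
    for _ in range(steps):
        J0 = loo_soft_error(W, X, y).mean()
        G = np.zeros_like(W)
        for idx in np.ndindex(W.shape):
            Wp = W.copy()
            Wp[idx] += eps
            G[idx] = (loo_soft_error(Wp, X, y).mean() - J0) / eps
        W = np.maximum(W - lr * G, 1e-3)          # keep weights positive
    return W

# Toy two-class data (not from the paper): feature 0 is informative,
# feature 1 is high-variance noise that an unweighted distance overweights.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.5, size=(30, 2)); X0[:, 0] -= 1.0
X1 = rng.normal(0.0, 0.5, size=(30, 2)); X1[:, 0] += 1.0
X = np.vstack([X0, X1])
X[:, 1] = rng.normal(0.0, 5.0, size=60)
y = np.array([0] * 30 + [1] * 30)
```

Learning should shrink the weight on the noise feature, so the weighted LOO error is no worse (typically much better) than with uniform weights.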
R. Paredes and E. Vidal, "Learning Weighted Metrics to Minimize Nearest-Neighbor Classification Error," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1100-1110, 2006.