Issue No. 7, July 2006 (vol. 28), pp. 1100-1110
R. Paredes, Dept. de Sistemas Informáticos y Computación, Univ. Politécnica de Valencia
E. Vidal, Dept. de Sistemas Informáticos y Computación, Univ. Politécnica de Valencia
ABSTRACT
To optimize the accuracy of the nearest-neighbor classification rule, a weighted distance is proposed, along with algorithms that automatically learn the corresponding weights. These weights may be specific to each class and feature, to each individual prototype, or to both. The learning algorithms are derived by (approximately) minimizing the leaving-one-out classification error on the given training set. The proposed approach is assessed through a series of experiments with UCI/STATLOG corpora, as well as with a more specific text-classification task that entails very sparse data representations and very high dimensionality. In all these experiments, the proposed approach shows uniformly good behavior, with results comparable to or better than the state-of-the-art results published so far on the same data.
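The idea described in the abstract, class-and-feature weights for a nearest-neighbor distance, learned by gradient descent on an approximation of the leaving-one-out error, can be sketched roughly as follows. This is a simplified illustration, not the paper's exact formulation: the sigmoid of the ratio between nearest same-class and nearest different-class distances is used here as a smooth surrogate of the leave-one-out error, and all function names, hyperparameters, and the toy data are illustrative.

```python
import math


def wdist(x, p, w):
    """Squared Euclidean distance between sample x and prototype p,
    weighted per feature by w (the weight vector of p's class)."""
    return sum((wk * (xk - pk)) ** 2 for wk, xk, pk in zip(w, x, p))


def loo_error(X, y, W):
    """Leaving-one-out nearest-neighbor error rate under weights W,
    where W[c] is the feature-weight vector applied to class-c prototypes."""
    errors = 0
    for i, x in enumerate(X):
        best_c, best_d = None, float("inf")
        for j, (p, c) in enumerate(zip(X, y)):
            if j != i:
                d = wdist(x, p, W[c])
                if d < best_d:
                    best_d, best_c = d, c
        errors += best_c != y[i]
    return errors / len(X)


def train_weights(X, y, n_classes, epochs=50, lr=0.05, beta=10.0):
    """Learn class-and-feature weights by gradient descent on a sigmoid
    of r = d_same / d_diff, a smooth surrogate of the LOO error:
    r < 1 means the held-out sample's nearest neighbor has the right class."""
    n_feats = len(X[0])
    W = [[1.0] * n_feats for _ in range(n_classes)]
    for _ in range(epochs):
        for i, x in enumerate(X):
            # Nearest same-class and different-class prototypes, leaving x out.
            js = jd = None
            ds = dd = float("inf")
            for j, (p, c) in enumerate(zip(X, y)):
                if j == i:
                    continue
                d = wdist(x, p, W[c])
                if c == y[i] and d < ds:
                    ds, js = d, j
                elif c != y[i] and d < dd:
                    dd, jd = d, j
            if js is None or jd is None or ds == 0.0 or dd == 0.0:
                continue
            r = ds / dd
            s = 1.0 / (1.0 + math.exp(-beta * (r - 1.0)))
            g = beta * s * (1.0 - s)  # derivative of the sigmoid w.r.t. r
            cs, cd = y[js], y[jd]
            ps, pd = X[js], X[jd]
            for k in range(n_feats):
                # Partial derivatives of r w.r.t. the two weight rows involved.
                grad_s = 2.0 * W[cs][k] * (x[k] - ps[k]) ** 2 / dd
                grad_d = -r * 2.0 * W[cd][k] * (x[k] - pd[k]) ** 2 / dd
                W[cs][k] = max(W[cs][k] - lr * g * grad_s, 1e-3)
                W[cd][k] = max(W[cd][k] - lr * g * grad_d, 1e-3)
    return W


if __name__ == "__main__":
    # Toy data: feature 0 separates the classes, feature 1 is mostly noise.
    X = [[0.0, 0.9], [0.1, 0.8], [0.2, 0.1],
         [1.0, 0.2], [0.9, 0.15], [0.8, 0.85]]
    y = [0, 0, 0, 1, 1, 1]
    uniform = [[1.0, 1.0], [1.0, 1.0]]
    W = train_weights(X, y, n_classes=2)
    print("LOO error before:", loo_error(X, y, uniform))
    print("LOO error after: ", loo_error(X, y, W))
```

On the toy data, the learned weights shrink the noisy feature and grow the discriminative one for each class, which can only lower (never raise) the leave-one-out error here. The paper additionally considers per-prototype weights, which this sketch omits for brevity.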
INDEX TERMS
Prototypes, computer errors, neural networks, training data, text categorization, nearest-neighbor searches, pattern classification, degradation, gradient descent, weighted distances, nearest neighbor, leaving-one-out, error minimization
CITATION
R. Paredes, E. Vidal, "Learning weighted metrics to minimize nearest-neighbor classification error", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 28, no. 7, pp. 1100-1110, July 2006, doi:10.1109/TPAMI.2006.145
