Locally Adaptive Metric Nearest-Neighbor Classification
September 2002 (vol. 24 no. 9)
pp. 1281-1285

Abstract—Nearest-neighbor classification assumes locally constant class-conditional probabilities. With finite samples in high dimensions, this assumption breaks down because of the curse of dimensionality, and the nearest-neighbor rule can incur severe bias. We propose a locally adaptive nearest-neighbor classification method that aims to minimize this bias. Using a Chi-squared distance analysis, we compute a flexible metric that produces neighborhoods highly adaptive to the query location: neighborhoods are elongated along less relevant feature dimensions and constricted along the most influential ones. As a result, the class-conditional probabilities are smoother in the modified neighborhoods, and better classification performance can be achieved. The efficacy of our method is validated and compared against other techniques on both simulated and real-world data.
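The core idea in the abstract — estimating, near the query, how well each feature separates the classes, then running k-NN under a metric that stretches neighborhoods along the uninformative dimensions — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact procedure: the quantile binning, the chi-squared relevance helper, and the parameters `k0` and `n_bins` are all assumptions made for the sketch.

```python
import numpy as np

def chi2_relevance(Xn, yn, n_bins=3):
    """Per-feature chi-squared relevance within a neighborhood (hypothetical
    helper). Each feature is quantile-binned and a chi-squared statistic is
    computed on the (bin x class) contingency table; higher values mean the
    feature's values separate the classes better locally."""
    classes = list(np.unique(yn))
    rel = np.zeros(Xn.shape[1])
    for j in range(Xn.shape[1]):
        col = Xn[:, j]
        edges = np.quantile(col, np.linspace(0.0, 1.0, n_bins + 1))
        bins = np.clip(np.searchsorted(edges[1:-1], col), 0, n_bins - 1)
        table = np.zeros((n_bins, len(classes)))
        for b, c in zip(bins, yn):
            table[b, classes.index(c)] += 1
        expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
        mask = expected > 0
        rel[j] = (((table - expected) ** 2) / np.where(mask, expected, 1.0))[mask].sum()
    return rel

def adaptive_knn_predict(X, y, query, k=5, k0=30):
    """Classify `query` with a locally adapted weighted Euclidean metric
    (sketch). A plain-Euclidean neighborhood of size k0 is used to estimate
    feature relevance at the query; distances are then re-weighted so that
    less relevant dimensions contribute less (elongating the neighborhood
    along them), and a k-NN majority vote is taken under the new metric."""
    d0 = np.linalg.norm(X - query, axis=1)
    nbr = np.argsort(d0)[:min(k0, len(X))]
    rel = chi2_relevance(X[nbr], y[nbr])
    w = rel / rel.sum() if rel.sum() > 0 else np.full(X.shape[1], 1.0 / X.shape[1])
    dw = np.sqrt(((X - query) ** 2 * w).sum(axis=1))
    top = y[np.argsort(dw)[:k]]
    vals, counts = np.unique(top, return_counts=True)
    return vals[np.argmax(counts)]
```

With a dataset where one feature carries the class signal and another is high-variance noise, the weighted metric downplays the noisy dimension, which is the effect the abstract describes.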

[1] D. Aha, “Lazy Learning,” Artificial Intelligence Rev., vol. 11, nos. 1-5, 1997.
[2] C. Atkeson, W. Moore, and S. Schaal, “Locally Weighted Learning,” Artificial Intelligence Rev., vol. 11, nos. 1-5, pp. 11-73, 1997.
[3] R.E. Bellman, Adaptive Control Processes. Princeton Univ. Press, 1961.
[4] L. Bottou and V. Vapnik, “Local Learning Algorithms,” Neural Computation, vol. 4, no. 6, pp. 888-900, 1992.
[5] L. Breiman, “Bagging Predictors,” Machine Learning, vol. 24, pp. 123-140, 1996.
[6] W.S. Cleveland and S.J. Devlin, “Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting,” J. Am. Statistical Assoc., vol. 83, pp. 596-610, 1988.
[7] R.O. Duda and P.E. Hart, Pattern Classification and Scene Analysis. John Wiley & Sons, 1973.
[8] J.H. Friedman, “Flexible Metric Nearest Neighbor Classification,” technical report, Dept. of Statistics, Stanford Univ., 1994.
[9] T. Hastie and R. Tibshirani, “Discriminant Adaptive Nearest Neighbor Classification,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 6, pp. 607-615, June 1996.
[10] D.G. Lowe, “Similarity Metric Learning for a Variable-Kernel Classifier,” Neural Computation, vol. 7, no. 1, pp. 72-85, 1995.
[11] G.J. McLachlan, Discriminant Analysis and Statistical Pattern Recognition. New York: Wiley, 1992.
[12] J.P. Myles and D.J. Hand, “The Multi-Class Metric Problem in Nearest Neighbour Discrimination Rules,” Pattern Recognition, vol. 23, pp. 1291-1297, 1990.
[13] J.R. Quinlan, C4.5: Programs for Machine Learning. San Mateo, Calif.: Morgan Kaufmann, 1992.
[14] R. Short and K. Fukunaga, “The Optimal Distance Measure for Nearest Neighbor Classification,” IEEE Trans. Information Theory, vol. 27, pp. 622-627, 1981.
[15] C.J. Stone, “Nonparametric Regression and Its Applications (with discussion),” Ann. Statistics, vol. 5, p. 595, 1977.

Index Terms:
Chi-squared distance, classification, feature relevance, nearest neighbors.
Citation:
Carlotta Domeniconi, Jing Peng, Dimitrios Gunopulos, "Locally Adaptive Metric Nearest-Neighbor Classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1281-1285, Sept. 2002, doi:10.1109/TPAMI.2002.1033219