

Issue No. 09 - September (2010, vol. 32)

pp. 1610-1626

Local-Learning-Based Feature Selection for High-Dimensional Data Analysis

Yijun Sun , University of Florida, Gainesville

Sinisa Todorovic , Oregon State University, Corvallis

Steve Goodison , M.D. Anderson Cancer Center-Orlando, Orlando

ABSTRACT

This paper considers feature selection for data classification in the presence of a huge number of irrelevant features. We propose a new feature-selection algorithm that addresses several major issues with prior work, including problems with algorithm implementation, computational complexity, and solution accuracy. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local learning, and then to learn feature relevance globally within the large-margin framework. The proposed algorithm is based on well-established machine learning and numerical analysis techniques, and makes no assumptions about the underlying data distribution. It can process many thousands of features within minutes on a personal computer while maintaining a very high accuracy that is nearly insensitive to a growing number of irrelevant features. Theoretical analyses suggest that the algorithm has a logarithmic sample complexity with respect to the number of features. Experiments on 11 synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem for supervised learning and the effectiveness of our algorithm.
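
To make the key idea concrete, below is a minimal sketch (not the authors' exact algorithm) of the general local-learning recipe the abstract describes: for each sample, form an element-wise margin vector from its nearest same-class neighbor ("hit") and nearest other-class neighbor ("miss") in the weighted feature space, then fit a nonnegative feature-weight vector by minimizing an \ell_1-regularized logistic loss over those margins. The function name, step size, and the use of a single fixed nearest hit/miss per iteration are illustrative assumptions.

```python
import numpy as np

def local_margin_feature_weights(X, y, lam=0.01, n_iter=50, lr=0.1):
    """Sketch of local-learning-based feature weighting (hypothetical, simplified).

    For each sample x, the margin vector is |x - nearest_miss| - |x - nearest_hit|
    computed element-wise, with neighbors found in the currently weighted space.
    Weights are updated by projected gradient descent on an l1-regularized
    logistic loss, keeping w nonnegative.
    """
    n, d = X.shape
    w = np.ones(d)
    for _ in range(n_iter):
        # element-wise absolute differences between all pairs: shape (n, n, d)
        D = np.abs(X[:, None, :] - X[None, :, :])
        # weighted L1 distances between all pairs: shape (n, n)
        dist = D @ w
        np.fill_diagonal(dist, np.inf)
        Z = np.empty((n, d))
        for i in range(n):
            same = (y == y[i])
            same[i] = False
            hit = np.argmin(np.where(same, dist[i], np.inf))   # nearest hit
            miss = np.argmin(np.where(~same, dist[i], np.inf))  # nearest miss
            Z[i] = D[i, miss] - D[i, hit]                       # margin vector
        # gradient of (1/n) * sum log(1 + exp(-w.z)) plus l1 subgradient (w >= 0)
        m = Z @ w
        g = -(Z * (1.0 / (1.0 + np.exp(m)))[:, None]).sum(axis=0) / n + lam
        w = np.maximum(w - lr * g, 0.0)  # project back onto w >= 0
    return w
```

In this simplified version, irrelevant features contribute near-zero average margins and are driven to zero by the \ell_1 penalty, while relevant features accumulate positive weight; the paper's actual formulation is more refined (e.g., in how neighbors are treated), so this is only a conceptual illustration.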

INDEX TERMS

Feature selection, local learning, logistic regression, \ell_1 regularization, sample complexity.

CITATION

Yijun Sun, Sinisa Todorovic, Steve Goodison, "Local-Learning-Based Feature Selection for High-Dimensional Data Analysis",

*IEEE Transactions on Pattern Analysis & Machine Intelligence*, vol. 32, no. 9, pp. 1610-1626, September 2010, doi:10.1109/TPAMI.2009.190