Issue No. 09 - September 2005 (vol. 27)
pp. 1496-1500
ABSTRACT
In this paper, we specifically focus on high-dimensional data sets for which the number of dimensions is an order of magnitude higher than the number of objects. From a classifier design standpoint, such small sample size problems pose some interesting challenges. The first challenge is to find, among all hyperplanes that separate the classes, a separating hyperplane that generalizes well to future data. A second important task is to determine which features are required to distinguish the classes. To attack these problems, we propose the LESS (Lowest Error in a Sparse Subspace) classifier, which efficiently finds linear discriminants in a sparse subspace. In contrast with most classifiers for high-dimensional data sets, the LESS classifier incorporates a (simple) data model. Further, by means of a regularization parameter, the classifier establishes a suitable trade-off between subspace sparseness and classification accuracy. In the experiments, we show how LESS performs on several high-dimensional data sets and compare its performance to related state-of-the-art classifiers such as linear ridge regression with the LASSO and the Support Vector Machine. It turns out that LESS performs competitively while using fewer dimensions.
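The core idea of the abstract, a linear classifier whose regularization parameter trades subspace sparseness against classification accuracy, can be sketched with a standard L1-regularized linear model. This is not the LESS method itself (LESS solves a specific mathematical program with its own data model); it is only an illustration, using L1-penalized logistic regression as a stand-in, of how a sparsity penalty selects few features in a small-sample, high-dimensional setting. The data set and all parameter values below are assumptions for the sketch.

```python
# Sketch: a sparse linear classifier on data with far more dimensions than
# objects, using L1-regularized logistic regression as a stand-in for LESS.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Small sample size: 40 objects in 400 dimensions, with only the first
# 5 dimensions actually informative about the class label.
n, d, informative = 40, 400, 5
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d))
X[:, :informative] += 2.0 * y[:, None]  # shift informative features per class

# The strength C plays the role of the sparseness/accuracy trade-off
# parameter: smaller C -> sparser subspace, larger C -> tighter fit.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, y)

n_used = int(np.count_nonzero(clf.coef_))  # dimensions actually used
acc = clf.score(X, y)                      # training accuracy
print(n_used, acc)
```

With the L1 penalty, only a small fraction of the 400 coefficients is nonzero, so the learned discriminant lives in a sparse subspace, mirroring the trade-off the abstract describes.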
INDEX TERMS
Classification, support vector machine, high-dimensional, feature subset selection, mathematical programming.
CITATION
Cor J. Veenman, David M.J. Tax, "LESS: A Model-Based Classifier for Sparse Subspaces", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 27, no. 9, pp. 1496-1500, September 2005, doi:10.1109/TPAMI.2005.182