Simultaneous Feature Selection and Clustering Using Mixture Models
IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 26, no. 9, September 2004
Martin H.C. Law, IEEE
Mário A.T. Figueiredo, IEEE
Anil K. Jain, IEEE
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2004.71
Clustering is a common unsupervised learning technique used to discover group structure in a set of data. Although many clustering algorithms exist, the important issue of feature selection, that is, which attributes of the data should be used by the clustering algorithm, is rarely addressed. Feature selection for clustering is difficult because, unlike in supervised learning, there are no class labels for the data and, thus, no obvious criteria to guide the search. Another important problem in clustering is determining the number of clusters, which clearly impacts, and is influenced by, the feature selection issue. In this paper, we propose the concept of feature saliency and introduce an expectation-maximization (EM) algorithm to estimate it, in the context of mixture-based clustering. Because a minimum message length (MML) model selection criterion is incorporated, the saliency of irrelevant features is driven toward zero, which corresponds to performing feature selection. The criterion and algorithm are then extended to simultaneously estimate the feature saliencies and the number of clusters.
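To illustrate the feature-saliency idea described in the abstract, the sketch below implements a simplified EM for a diagonal-Gaussian mixture in which each feature j is drawn from a component-specific density with probability rho_j (its saliency) or from a common density shared by all components otherwise. This is a minimal sketch under assumed Gaussian densities; it uses the plain maximum-likelihood saliency update and omits the MML penalty the paper uses to drive irrelevant saliencies exactly to zero, and all names are illustrative, not the authors' code.

```python
import numpy as np

def norm_pdf(x, mu, var):
    """Univariate Gaussian density, broadcast over arrays."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def feature_saliency_em(X, K, n_iter=100):
    """Simplified feature-saliency EM for a diagonal Gaussian mixture.

    Returns per-feature saliencies rho (D,) and responsibilities w (N, K).
    """
    N, D = X.shape
    alpha = np.full(K, 1.0 / K)                    # mixing proportions
    # Component-specific parameters, initialized at spread-out quantiles.
    mu = np.quantile(X, np.linspace(0.1, 0.9, K), axis=0)   # (K, D)
    var = np.tile(X.var(axis=0), (K, 1)) + 1e-6             # (K, D)
    # Common ("irrelevant-feature") density parameters.
    mu0 = X.mean(axis=0)
    var0 = X.var(axis=0) + 1e-6
    rho = np.full(D, 0.5)                          # feature saliencies

    for _ in range(n_iter):
        # E-step: per-feature relevant vs. common likelihood terms.
        a = rho * norm_pdf(X[:, None, :], mu, var)            # (N, K, D)
        b = (1 - rho) * norm_pdf(X, mu0, var0)[:, None, :]    # (N, 1, D)
        c = a + b + 1e-300
        logp = np.log(alpha)[None, :] + np.log(c).sum(axis=2)
        logp -= logp.max(axis=1, keepdims=True)
        w = np.exp(logp)
        w /= w.sum(axis=1, keepdims=True)                     # (N, K)
        u = w[:, :, None] * (a / c)   # expected "feature relevant" weight
        v = w[:, :, None] * (b / c)   # expected "feature irrelevant" weight

        # M-step: weighted parameter updates.
        alpha = w.sum(axis=0) / N
        su = u.sum(axis=0)                                    # (K, D)
        mu = (u * X[:, None, :]).sum(axis=0) / (su + 1e-12)
        var = (u * (X[:, None, :] - mu) ** 2).sum(axis=0) / (su + 1e-12) + 1e-6
        sv = v.sum(axis=(0, 1))                               # (D,)
        mu0 = (v * X[:, None, :]).sum(axis=(0, 1)) / (sv + 1e-12)
        var0 = (v * (X[:, None, :] - mu0) ** 2).sum(axis=(0, 1)) / (sv + 1e-12) + 1e-6
        # ML saliency update; the paper's MML penalty term is omitted here.
        rho = u.sum(axis=(0, 1)) / N
    return rho, w
```

On synthetic data where one feature separates the clusters and another is pure noise, the saliency of the separating feature grows while the noise feature's saliency stays low, which is the behavior the MML criterion then sharpens into exact pruning.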
Keywords: Feature selection, clustering, unsupervised learning, mixture models, minimum message length, EM algorithm.
Martin H.C. Law, Mário A.T. Figueiredo, Anil K. Jain, "Simultaneous Feature Selection and Clustering Using Mixture Models," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 26, no. 9, pp. 1154-1166, September 2004, doi:10.1109/TPAMI.2004.71