Issue No. 1 - January 2008 (vol. 30), pp. 62-75
Hierarchical clustering is a stepwise clustering method, usually based on proximity measures between objects or sets of objects of a given data set. The most common proximity measures are distance measures. The derived proximity matrices can be used to build graphs, which provide the basic structure for some clustering methods. We present a new proximity matrix based on an entropic measure, together with a clustering algorithm (LEGClust) that builds layers of subgraphs from this matrix and combines them with a hierarchical agglomerative technique to form the clusters. Our approach capitalizes on both a graph structure and a hierarchical construction. Moreover, by using entropy as a proximity measure, we are able, without any assumption about cluster shapes, to capture the local structure of the data and force the clustering method to reflect it. Experiments on artificial and real data sets provide evidence of the superior performance of the new algorithm compared with competing ones.
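The hierarchical agglomerative step mentioned in the abstract can be illustrated with a generic single-link sketch over a Euclidean proximity matrix. This is a standard textbook procedure, not the authors' entropic proximity or the LEGClust algorithm itself; the function name and the use of Euclidean distance are illustrative assumptions:

```python
import numpy as np

def agglomerative_single_link(X, n_clusters):
    """Generic single-link agglomerative clustering: every point starts
    as its own cluster and the two closest clusters are merged until
    n_clusters remain. (Illustrative sketch, not LEGClust.)"""
    n = len(X)
    # Euclidean pairwise distances play the role of the proximity matrix.
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    clusters = [{i} for i in range(n)]
    while len(clusters) > n_clusters:
        best = (0, 1, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single link: proximity of the two closest members.
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a] |= clusters[b]   # merge the closest pair
        del clusters[b]
    labels = np.empty(n, dtype=int)
    for k, members in enumerate(clusters):
        for i in members:
            labels[i] = k
    return labels
```

LEGClust replaces the distance-based proximity used here with an entropy-based one and merges along layered subgraphs, which is what lets it follow the local structure of the data without assuming cluster shapes.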
Index Terms: Clustering, Entropy, Graphs
Jorge M. Santos, Joaquim Marques de Sa, Luis A. Alexandre, "LEGClust—A Clustering Algorithm Based on Layered Entropic Subgraphs", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.30, no. 1, pp. 62-75, January 2008, doi:10.1109/TPAMI.2007.1142