Ana L.N. Fred and Anil K. Jain, "Combining Multiple Clusterings Using Evidence Accumulation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, pp. 835-850, June 2005, doi:10.1109/TPAMI.2005.113.
[1] D. Fasulo, “An Analysis of Recent Work on Clustering,” technical report, Univ. of Washington, Seattle, http://www.cs.washington.edu/homes/dfasulo/clustering.ps, http://citeseer.nj.nec.com/fasulo99analysis.html, 1999.
[2] D. Judd, P. Mckinley, and A.K. Jain, “Large-Scale Parallel Data Clustering,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 2, pp. 153-158, Feb. 1997.
[3] S.K. Bhatia and J.S. Deogun, “Conceptual Clustering in Information Retrieval,” IEEE Trans. Systems, Man, and Cybernetics, vol. 28, no. 3, pp. 427-536, 1998.
[4] C. Carpineto and G. Romano, “A Lattice Conceptual Clustering System and Its Application to Browsing Retrieval,” Machine Learning, vol. 24, no. 2, pp. 95-122, 1996.
[5] E.J. Pauwels and G. Frederix, “Finding Regions of Interest for Content-Extraction,” Proc. IS&T/SPIE Conf. Storage and Retrieval for Image and Video Databases VII, vol. 3656, pp. 501-510, Jan. 1999.
[6] H. Frigui and R. Krishnapuram, “A Robust Competitive Clustering Algorithm with Applications in Computer Vision,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 450-466, May 1999.
[7] A.K. Jain, M.N. Murty, and P.J. Flynn, “Data Clustering: A Review,” ACM Computing Surveys, vol. 31, no. 3, pp. 264-323, Sept. 1999.
[8] R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification, second ed. Wiley, 2001.
[9] L. Kaufman and P.J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons, Inc., 1990.
[10] B. Everitt, Cluster Analysis. John Wiley and Sons, 1993.
[11] S. Theodoridis and K. Koutroumbas, Pattern Recognition. Academic Press, 1999.
[12] A.K. Jain and J.V. Moreau, “Bootstrap Technique in Cluster Analysis,” Pattern Recognition, vol. 20, no. 5, pp. 547-568, 1987.
[13] R. Kothari and D. Pitts, “On Finding the Number of Clusters,” Pattern Recognition Letters, vol. 20, pp. 405-416, 1999.
[14] J. Buhmann and M. Held, “Unsupervised Learning without Overfitting: Empirical Risk Approximation as an Induction Principle for Reliable Clustering,” Proc. Int'l Conf. Advances in Pattern Recognition, S. Singh, ed., pp. 167-176, 1999.
[15] D. Stanford and A.E. Raftery, “Principal Curve Clustering with Noise,” Technical Report, Univ. of Washington, http://www.stat.washington.edu/raftery, 1997.
[16] Y. Man and I. Gath, “Detection and Separation of Ring-Shaped Clusters Using Fuzzy Clusters,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 16, no. 8, pp. 855-861, Aug. 1994.
[17] R. Dubes and A.K. Jain, “Validity Studies in Clustering Methodologies,” Pattern Recognition, vol. 11, pp. 235-254, 1979.
[18] T.A. Bailey and R. Dubes, “Cluster Validity Profiles,” Pattern Recognition, vol. 15, no. 2, pp. 61-83, 1982.
[19] M. Har-Even and V.L. Brailovsky, “Probabilistic Validation Approach for Clustering,” Pattern Recognition, vol. 16, pp. 1189-1196, 1995.
[20] N.R. Pal and J.C. Bezdek, “On Cluster Validity for the Fuzzy C-Means Model,” IEEE Trans. Fuzzy Systems, vol. 3, pp. 370-379, 1995.
[21] A. Fred and J. Leitão, “Clustering under a Hypothesis of Smooth Dissimilarity Increments,” Proc. 15th Int'l Conf. Pattern Recognition, vol. 2, pp. 190-194, 2000.
[22] A. Fred, “Clustering Based on Dissimilarity First Derivatives,” Proc. Second Int'l Workshop Pattern Recognition in Information Systems, J. Inesta and L. Micó, eds., pp. 257-266, 2002.
[23] G. McLachlan and K. Basford, Mixture Models: Inference and Application to Clustering. New York: Marcel Dekker, 1988.
[24] S. Roberts, D. Husmeier, I. Rezek, and W. Penny, “Bayesian Approaches to Gaussian Mixture Modeling,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1133-1142, Nov. 1998.
[25] M. Figueiredo and A.K. Jain, “Unsupervised Learning of Finite Mixture Models,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 3, pp. 381-396, Mar. 2002.
[26] J.D. Banfield and A.E. Raftery, “Model-Based Gaussian and Non-Gaussian Clustering,” Biometrics, vol. 49, pp. 803-821, Sept. 1993.
[27] B. Mirkin, “Concept Learning and Feature Selection Based on Square-Error Clustering,” Machine Learning, vol. 35, pp. 25-39, 1999.
[28] A.K. Jain and R.C. Dubes, Algorithms for Clustering Data. Prentice Hall, 1988.
[29] H. Tenmoto, M. Kudo, and M. Shimbo, “MDL-Based Selection of the Number of Components in Mixture Models for Pattern Recognition,” Proc. Advances in Pattern Recognition, A. Amin, D. Dori, P. Pudil, and H. Freeman, eds., pp. 831-836, 1998.
[30] H. Bischof and A. Leonardis, “Vector Quantization and Minimum Description Length,” Proc. Int'l Conf. Advances on Pattern Recognition, S. Singh, ed., pp. 355-364, 1999.
[31] B. Fischer, T. Zoller, and J. Buhmann, “Path Based Pairwise Data Clustering with Application to Texture Segmentation,” Proc. Third Int'l Workshop Energy Minimization Methods in Computer Vision and Pattern Recognition, M. Figueiredo, J. Zerubia, and A.K. Jain, eds., pp. 235-266, 2001.
[32] K. Fukunaga, Introduction to Statistical Pattern Recognition. New York: Academic Press, 1990.
[33] E. Gokcay and J.C. Principe, “Information Theoretic Clustering,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 158-171, Feb. 2002.
[34] C. Zahn, “Graph-Theoretical Methods for Detecting and Describing Gestalt Structures,” IEEE Trans. Computers, vol. 20, no. 1, pp. 68-86, Jan. 1971.
[35] Y. El-Sonbaty and M.A. Ismail, “On-Line Hierarchical Clustering,” Pattern Recognition Letters, pp. 1285-1291, 1998.
[36] M. Chavent, “A Monothetic Clustering Method,” Pattern Recognition Letters, vol. 19, pp. 989-996, 1998.
[37] A. Fred and J. Leitão, “A Comparative Study of String Dissimilarity Measures in Structural Clustering,” Proc. Int'l Conf. Advances in Pattern Recognition, S. Singh, ed., pp. 385-394, 1998.
[38] S. Guha, R. Rastogi, and K. Shim, “CURE: An Efficient Clustering Algorithm for Large Databases,” Proc. 1998 ACM SIGMOD Int'l Conf. Management of Data, 1998.
[39] E.W. Tyree and J.A. Long, “The Use of Linked Line Segments for Cluster Representation and Data Reduction,” Pattern Recognition Letters, vol. 20, pp. 21-29, 1999.
[40] Y. Cheng, “Mean Shift, Mode Seeking, and Clustering,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 17, pp. 790-799, 1995.
[41] D. Comaniciu and P. Meer, “Distribution Free Decomposition of Multivariate Data,” Pattern Analysis and Applications, vol. 2, pp. 22-30, 1999.
[42] G. Karypis, E.H. Han, and V. Kumar, “CHAMELEON: A Hierarchical Clustering Algorithm Using Dynamic Modeling,” Computer, vol. 32, no. 8, pp. 68-75, Aug. 1999.
[43] P. Bajcsy and N. Ahuja, “Location and Density-Based Hierarchical Clustering Using Similarity Analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 9, pp. 1011-1015, Sept. 1998.
[44] J. Shi and J. Malik, “Normalized Cuts and Image Segmentation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, Aug. 2000.
[45] A.Y. Ng, M.I. Jordan, and Y. Weiss, “On Spectral Clustering: Analysis and an Algorithm,” Advances in Neural Information Processing Systems 14, T.G. Dietterich, S. Becker, and Z. Ghahramani, eds., Cambridge, Mass.: MIT Press, 2002.
[46] N. Cristianini, J. Shawe-Taylor, and J. Kandola, “Spectral Kernel Methods for Clustering,” Advances in Neural Information Processing Systems 14, T.G. Dietterich, S. Becker, and Z. Ghahramani, eds., Cambridge, Mass.: MIT Press, 2002.
[47] P.Y. Yin, “Algorithms for Straight Line Fitting Using k-Means,” Pattern Recognition Letters, vol. 19, pp. 31-41, 1998.
[48] C. Fraley and A.E. Raftery, “How Many Clusters? Which Clustering Method? Answers via Model-Based Cluster Analysis,” The Computer J., vol. 41, no. 8, pp. 578-588, 1998.
[49] R. Dubes and A.K. Jain, “Clustering Techniques: The User's Dilemma,” Pattern Recognition, vol. 8, pp. 247-260, 1976.
[50] J. Kittler, M. Hatef, R.P. Duin, and J. Matas, “On Combining Classifiers,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, Mar. 1998.
[51] T. Dietterich, “Ensemble Methods in Machine Learning,” Proc. First Int'l Workshop Multiple Classifier Systems, J. Kittler and F. Roli, eds., pp. 1-15, 2000.
[52] L. Lam, “Classifier Combinations: Implementations and Theoretical Issues,” Proc. First Int'l Workshop Multiple Classifier Systems, J. Kittler and F. Roli, eds., pp. 78-86, 2000.
[53] A. Fred, “Finding Consistent Clusters in Data Partitions,” Proc. Second Int'l Workshop Multiple Classifier Systems, J. Kittler and F. Roli, eds., pp. 309-318, 2001.
[54] A. Fred and A.K. Jain, “Data Clustering Using Evidence Accumulation,” Proc. 16th Int'l Conf. Pattern Recognition, pp. 276-280, 2002.
[55] A. Fred and A.K. Jain, “Evidence Accumulation Clustering Based on the k-Means Algorithm,” Proc. Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR Int'l Workshops SSPR 2002 and SPR 2002, T. Caelli et al., eds., pp. 442-451, 2002.
[56] A. Strehl and J. Ghosh, “Cluster Ensembles—A Knowledge Reuse Framework for Combining Multiple Partitions,” J. Machine Learning Research, vol. 3, pp. 583-617, Dec. 2002.
[57] B. Kamgar-Parsi and L.N. Kanal, “An Improved Branch and Bound Algorithm for Computing k-Nearest Neighbors,” Pattern Recognition Letters, vol. I, pp. 195-205, 1985.
[58] T.M. Cover and J.A. Thomas, Elements of Information Theory. Wiley, 1991.
[59] A. Raftery, K. Yeung, C. Fraley, and W. Ruzzo, “Model-Based Clustering and Data Transformation for Gene Expression Data,” Technical Report UW-CSE-01-04-02, Dept. of Computer Science and Eng., Univ. of Washington, 2001.
[60] R.A. Jarvis and E.A. Patrick, “Clustering Using a Similarity Measure Based on Shared Nearest Neighbors,” IEEE Trans. Computers, vol. 22, no. 11, Nov. 1973.
[61] L. Ertoz, M. Steinbach, and V. Kumar, “A New Shared Nearest Neighbor Clustering Algorithm and Its Applications,” Proc. Workshop Clustering High Dimensional Data and Its Applications at Second SIAM Int'l Conf. Data Mining, http://www-users.cs.umn.edu/~kumar/papers/papers.html, 2002.