On Weight Design of Maximum Weighted Likelihood and an Extended EM Algorithm
October 2006 (vol. 18 no. 10)
pp. 1429-1434
The recently proposed Maximum Weighted Likelihood (MWL) approach [18], [19] provides a general learning paradigm for density-mixture model selection and learning, in which weight design is a key issue. This paper therefore explores such a design, from which a heuristic extended Expectation-Maximization (X-EM) algorithm is derived. Unlike the EM algorithm [1], the X-EM algorithm performs model selection by fading redundant components out of a density mixture while estimating the model parameters appropriately. Numerical simulations demonstrate the efficacy of the algorithm.
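To make the fade-out idea concrete, the sketch below runs a toy batch EM loop for a one-dimensional Gaussian mixture in which the mixing proportions are updated with a crude rival-penalized weighting, in the spirit of rival penalized learning and the RPEM of [18], [19], not the weight design derived in this paper: the per-point winning component keeps full weight while its rivals are charged a penalty, so the proportions of redundant components are driven toward zero and pruned. All names (xem_sketch, gamma, prune_tol) are hypothetical, and the surviving component count depends on the initialization and on gamma.

import numpy as np

def xem_sketch(x, k_init=8, n_iter=300, gamma=0.5, prune_tol=1e-4, seed=0):
    # Toy batch EM for a 1-D Gaussian mixture with a rival-penalized update
    # of the mixing proportions, so redundant components fade out and are
    # pruned. Illustrative only; NOT the MWL weight design of the paper.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.size
    mu = rng.choice(x, size=k_init, replace=False)   # over-initialized means
    var = np.full(k_init, x.var())
    pi = np.full(k_init, 1.0 / k_init)
    for _ in range(n_iter):
        # E-step: responsibilities h_j(x_t) proportional to
        # pi_j * N(x_t | mu_j, var_j), computed in log space for safety.
        logr = np.log(pi) - 0.5 * ((x[:, None] - mu) ** 2 / var
                                   + np.log(2.0 * np.pi * var))
        logr -= logr.max(axis=1, keepdims=True)
        resp = np.exp(logr)
        resp /= resp.sum(axis=1, keepdims=True)
        # Rival-penalized weights for the mixing proportions: the per-point
        # winner keeps full weight 1; every rival is charged -gamma * h_j.
        w = -gamma * resp
        w[np.arange(n), resp.argmax(axis=1)] = 1.0
        pi = np.clip(w.sum(axis=0), 0.0, None)
        pi /= pi.sum()
        # Ordinary M-step for the means and variances.
        nk = np.maximum(resp.sum(axis=0), 1e-9)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk,
                         1e-6)
        # Fade-out: discard components whose proportion has (nearly) vanished.
        keep = pi > prune_tol
        pi, mu, var = pi[keep] / pi[keep].sum(), mu[keep], var[keep]
    return pi, mu, var

# 900 points from a 3-component mixture; start deliberately over-sized (k=8).
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(m, 1.0, 300) for m in (-5.0, 0.0, 5.0)])
pi, mu, var = xem_sketch(x, k_init=8)
print(pi.size, "components survive; means:", np.sort(mu).round(2))

Note that only the mixing proportions are rival-penalized here; the means and variances follow the ordinary M-step, which keeps the sketch close to standard EM and isolates the fade-out mechanism in a single update.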

[1] A.P. Dempster, N.M. Laird, and D.B. Rubin, “Maximum Likelihood from Incomplete Data via the EM Algorithm,” J. Royal Statistical Soc. Series B, vol. 39, no. 1, pp. 1-38, 1977.
[2] X.L. Meng and D.B. Rubin, “Maximum Likelihood Estimation via the ECM Algorithm: A General Framework,” Biometrika, vol. 80, no. 2, pp. 267-278, 1993.
[3] X.L. Meng and D.A. van Dyk, “The EM Algorithm—An Old Folk Song Sung to a Fast New Tune,” J. Royal Statistical Soc. Series B, vol. 59, pp. 511-567, 1997.
[4] N. Ueda, R. Nakano, Z. Ghahramani, and G.E. Hinton, “SMEM Algorithm for Mixture Models,” Neural Computation, vol. 12, pp. 2109-2128, 2000.
[5] B. Zhang, C. Zhang, and X. Yi, “Competitive EM Algorithm for Finite Mixture Models,” Pattern Recognition, vol. 37, pp. 131-144, 2004.
[6] Y. Wu, Q. Tian, and T.S. Huang, “Discriminant-EM Algorithm with Application to Image Retrieval,” Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, pp. 222-227, 2000.
[7] G.J. McLachlan and K.E. Basford, Mixture Models: Inference and Applications to Clustering. Marcel Dekker, 1988.
[8] F. Sparacino, “Sto(ry)chastics: A Bayesian Network Architecture for User Modeling and Computational Storytelling for Interactive Spaces,” Proc. Fifth Int'l Conf. Ubiquitous Computing, pp. 54-72, 2003.
[9] V. Krishnamurthy and J.B. Moore, “On-Line Estimation of Hidden Markov Model Parameters Based on the Kullback-Leibler Information Measure,” IEEE Trans. Signal Processing, vol. 41, pp. 2557-2573, Aug. 1993.
[10] I. Holmes and G.M. Rubin, “An Expectation Maximization Algorithm for Training Hidden Substitution Models,” J. Molecular Biology, vol. 317, no. 5, pp. 753-764, 2002.
[11] R. Kass and A.E. Raftery, “Bayes Factors and Model Uncertainty,” J. Am. Statistical Assoc., vol. 90, pp. 773-795, 1995.
[12] H. Akaike, “Information Theory and an Extension of the Maximum Likelihood Principle,” Proc. Second Int'l Symp. Information Theory, pp. 267-281, 1973.
[13] H. Akaike, “A New Look at the Statistical Model Identification,” IEEE Trans. Automatic Control, vol. AC-19, pp. 716-723, 1974.
[14] G. Schwarz, “Estimating the Dimension of a Model,” The Annals of Statistics, vol. 6, no. 2, pp. 461-464, 1978.
[15] H. Bozdogan, “Model Selection and Akaike's Information Criterion: The General Theory and Its Analytical Extensions,” Psychometrika, vol. 52, no. 3, pp. 345-370, 1987.
[16] P.J. Green, “Reversible Jump Markov Chain Monte Carlo Computation and Bayesian Model Determination,” Biometrika, vol. 82, no. 4, pp. 711-732, 1995.
[17] S. Richardson and P.J. Green, “On Bayesian Analysis of Mixtures with an Unknown Number of Components (with Discussion),” J. Royal Statistical Soc. Series B, vol. 59, pp. 731-792, 1997.
[18] Y.M. Cheung, “A Rival Penalized EM Algorithm towards Maximizing Weighted Likelihood for Density Mixture Clustering with Automatic Model Selection,” Proc. Int'l Conf. Pattern Recognition, vol. 4, pp. 633-636, 2004.
[19] Y.M. Cheung, “Maximum Weighted Likelihood via Rival Penalized EM for Density Mixture Clustering with Automatic Model Selection,” IEEE Trans. Knowledge and Data Eng., vol. 17, no. 6, pp. 750-761, June 2005.
[20] L. Xu, “Bayesian Ying-Yang Machine, Clustering, and Number of Clusters,” Pattern Recognition Letters, vol. 18, nos. 11-13, pp. 1167-1178, 1997.
[21] R.J. Cho, M.J. Campbell, E.A. Winzeler, L. Steinmetz, A. Conway, L. Wodicka, T.G. Wolfsberg, A.E. Gabrielian, D. Landsman, D.J. Lockhart, and R.W. Davis, “A Genome-Wide Transcriptional Analysis of the Mitotic Cell Cycle,” Molecular Cell, vol. 2, pp. 65-73, 1998.
[22] K.Y. Yeung, C. Fraley, A. Murua, A.E. Raftery, and W.L. Ruzzo, “Model-Based Clustering and Data Transformations for Gene Expression Data,” Bioinformatics, vol. 17, pp. 977-987, 2001.
[23] Y. Qu and S. Xu, “Supervised Cluster Analysis for Microarray Data Based on Multivariate Gaussian Mixture,” Bioinformatics, vol. 20, pp. 1905-1913, 2004.
[24] M.P.S. Brown, W.N. Grundy, D. Lin, N. Cristianini, C.W. Sugnet, T.S. Furey, M. Ares, and D. Haussler, “Knowledge-Based Analysis of Microarray Gene Expression Data by Using Support Vector Machines,” Proc. Nat'l Academy of Sciences of the USA, vol. 97, pp. 262-267, 2000.

Index Terms:
Maximum weighted likelihood, weight design, extended expectation-maximization algorithm, model selection.
Citation:
Zhenyue Zhang, Yiu-ming Cheung, "On Weight Design of Maximum Weighted Likelihood and an Extended EM Algorithm," IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 10, pp. 1429-1434, Oct. 2006, doi:10.1109/TKDE.2006.163