Issue No. 10 - October 2011 (vol. 33)
pp. 2093-2103
Ting Yang, Dept. of Applied Mathematics & Statistics, Johns Hopkins University, Baltimore, MD, USA
Semi-supervised classification, which trains on both labeled and unlabeled observations, can yield improved performance compared to a classifier trained on the labeled observations alone. Unlabeled observations are always beneficial to classification when the assumed model is correct; however, they may degrade classifier performance when the model is misspecified. In the classical classification setting, many factors affect semi-supervised performance, including the training data, the model specification, the estimation method, and the classifier itself. For concreteness, we consider maximum likelihood estimation in finite mixture models and the Bayes plug-in classifier, owing to their ubiquity and tractability. In this specific setting, we examine the effect of model misspecification on semi-supervised classification performance and shed light on when and why performance degradation occurs.
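The setting described above can be illustrated with a minimal sketch (not the paper's analysis): semi-supervised maximum likelihood estimation in a two-component univariate Gaussian mixture via EM, where labeled points contribute hard class responsibilities and unlabeled points contribute soft posterior responsibilities, followed by Bayes plug-in classification. All function names and the toy data generator are illustrative assumptions; here the assumed model is correct, so the unlabeled data helps rather than hurts.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated elementwise at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def fit_semisupervised(x_lab, y_lab, x_unl, n_iter=100):
    """EM for a two-class Gaussian mixture using labeled + unlabeled data.

    Returns ML estimates (pi, mu, sd) of the class priors, means, and
    standard deviations; initialization comes from the labeled data alone.
    """
    pi = np.array([np.mean(y_lab == k) for k in (0, 1)])
    mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    sd = np.array([x_lab[y_lab == k].std() + 1e-3 for k in (0, 1)])
    x_all = np.concatenate([x_lab, x_unl])
    # Labeled points have fixed 0/1 responsibilities.
    hard = np.stack([(y_lab == k).astype(float) for k in (0, 1)])
    for _ in range(n_iter):
        # E-step: posterior responsibilities for the unlabeled points.
        dens = np.stack([pi[k] * normal_pdf(x_unl, mu[k], sd[k]) for k in (0, 1)])
        resp = dens / dens.sum(axis=0)
        w = np.concatenate([hard, resp], axis=1)
        # M-step: weighted maximum likelihood updates.
        n_k = w.sum(axis=1)
        pi = n_k / n_k.sum()
        mu = (w @ x_all) / n_k
        sd = np.sqrt((w * (x_all - mu[:, None]) ** 2).sum(axis=1) / n_k) + 1e-6
    return pi, mu, sd

def bayes_plugin(x, pi, mu, sd):
    """Bayes plug-in rule: assign the class with the larger estimated posterior."""
    post = np.stack([pi[k] * normal_pdf(x, mu[k], sd[k]) for k in (0, 1)])
    return post.argmax(axis=0)

# Toy data from a correctly specified model: class k ~ N(3k, 1).
n_lab, n_unl = 20, 500
y_lab = np.repeat(np.arange(2), n_lab // 2)
x_lab = rng.normal(loc=3.0 * y_lab, scale=1.0)
y_unl = rng.integers(0, 2, n_unl)
x_unl = rng.normal(loc=3.0 * y_unl, scale=1.0)

pi, mu, sd = fit_semisupervised(x_lab, y_lab, x_unl)
acc = np.mean(bayes_plugin(x_unl, pi, mu, sd) == y_unl)
```

Under misspecification (e.g., fitting this two-Gaussian model to heavier-tailed or skewed class-conditional densities), the same EM updates can pull the plug-in rule away from the optimal boundary as the unlabeled sample grows, which is the degradation phenomenon the paper studies.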
Index terms: pattern classification, Bayes methods, learning (artificial intelligence), maximum likelihood estimation, performance degradation, unlabeled observations, classifier performance, training data, model specification, estimation method, finite mixture model, Bayes plug-in classifier, model misspecification, semi-supervised classification, error analysis, biological system modeling, parametric statistics, estimation error
Ting Yang, "The Effect of Model Misspecification on Semi-Supervised Classification", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.33, no. 10, pp. 2093-2103, October 2011, doi:10.1109/TPAMI.2011.45