Pavan Kumar Mallapragada, Rong Jin, Anil K. Jain, Yi Liu, "SemiBoost: Boosting for Semi-Supervised Learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 11, pp. 2000-2014, Nov. 2009.
[1] H.J. Scudder, "Probability of Error of Some Adaptive Pattern-Recognition Machines," IEEE Trans. Information Theory, vol. 11, no. 3, pp. 363-371, July 1965.
[2] H. Robbins and S. Monro, "A Stochastic Approximation Method," Annals of Math. Statistics, vol. 22, pp. 400-407, 1951.
[3] X. Zhu and Z. Ghahramani, "Learning from Labeled and Unlabeled Data with Label Propagation," Technical Report CMU-CALD-02-107, Carnegie Mellon Univ., 2002.
[4] Y. Bengio, O. Delalleau, and N. Le Roux, "Label Propagation and Quadratic Criterion," Semi-Supervised Learning, O. Chapelle, B. Schölkopf, and A. Zien, eds., pp. 193-216, MIT Press, 2006.
[5] M. Szummer and T. Jaakkola, "Partially Labeled Classification with Markov Random Walks," Proc. Neural Information Processing Systems Conf., pp. 945-952, 2001.
[6] A. Blum and S. Chawla, "Learning from Labeled and Unlabeled Data Using Graph Mincuts," Proc. 18th Int'l Conf. Machine Learning, pp. 19-26, 2001.
[7] T. Joachims, "Transductive Learning via Spectral Graph Partitioning," Proc. 20th Int'l Conf. Machine Learning, pp. 290-297, 2003.
[8] O. Chapelle and A. Zien, "Semi-Supervised Classification by Low Density Separation," Proc. 10th Int'l Workshop Artificial Intelligence and Statistics, pp. 57-64, 2005.
[9] Semi-Supervised Learning, O. Chapelle, B. Schölkopf, and A. Zien, eds. MIT Press, 2006.
[10] T. Joachims, "Transductive Inference for Text Classification Using Support Vector Machines," Proc. 16th Int'l Conf. Machine Learning, pp. 200-209, 1999.
[11] G. Fung and O. Mangasarian, "Semi-Supervised Support Vector Machines for Unlabeled Data Classification," Optimization Methods and Software, vol. 15, pp. 29-44, 2001.
[12] M. Belkin, P. Niyogi, and V. Sindhwani, "Manifold Regularization: A Geometric Framework for Learning from Examples," Technical Report TR-2004-06, Dept. of Computer Science, Univ. of Chicago, 2004.
[13] V. Vural, G. Fung, J.G. Dy, and B. Rao, "Semi-Supervised Classifiers Using A Priori Metric Information," Optimization Methods and Software J., special issue on machine learning and data mining, to appear.
[14] Y. Freund and R.E. Schapire, "Experiments with a New Boosting Algorithm," Proc. 13th Int'l Conf. Machine Learning, pp. 148-156, 1996.
[15] C. Rosenberg, M. Hebert, and H. Schneiderman, "Semi-Supervised Self-Training of Object Detection Models," Proc. Seventh Workshop Applications of Computer Vision, vol. 1, pp. 29-36, Jan. 2005.
[16] K.P. Bennett, A. Demiriz, and R. Maclin, "Exploiting Unlabeled Data in Ensemble Methods," Proc. Eighth ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 289-296, 2002.
[17] F. d'Alché-Buc, Y. Grandvalet, and C. Ambroise, "Semi-Supervised MarginBoost," Proc. Neural Information Processing Systems Conf., pp. 553-560, 2002.
[18] X. Zhu, Z. Ghahramani, and J. Lafferty, "Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions," Proc. 20th Int'l Conf. Machine Learning, pp. 912-919, 2003.
[19] O. Chapelle and A. Zien, "Semi-Supervised Classification by Low Density Separation," Proc. 10th Int'l Workshop Artificial Intelligence and Statistics, pp. 57-64, 2005.
[20] A. Blum and T. Mitchell, "Combining Labeled and Unlabeled Data with Co-Training," Proc. Workshop Computational Learning Theory, pp. 92-100, 1998.
[21] D. Miller and H. Uyar, "A Mixture of Experts Classifier with Learning Based on Both Labeled and Unlabeled Data," Proc. Neural Information Processing Systems Conf., pp. 571-577, 1996.
[22] K. Nigam, A.K. McCallum, S. Thrun, and T. Mitchell, "Text Classification from Labeled and Unlabeled Documents Using EM," Machine Learning, vol. 39, pp. 103-134, 2000.
[23] N.D. Lawrence and M.I. Jordan, "Semi-Supervised Learning via Gaussian Processes," Proc. Neural Information Processing Systems Conf., pp. 753-760, 2005.
[24] K. Bennett and A. Demiriz, "Semi-Supervised Support Vector Machines," Proc. Neural Information Processing Systems Conf., pp. 368-374, 1998.
[25] Y. Freund and R.E. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting," J. Computer and System Sciences, vol. 55, pp. 119-139, Aug. 1997.
[26] D. Zhou, J. Huang, and B. Schölkopf, "Learning from Labeled and Unlabeled Data on a Directed Graph," Proc. 22nd Int'l Conf. Machine Learning, pp. 1036-1043, 2005.
[27] J. Friedman, T. Hastie, and R. Tibshirani, "Special Invited Paper. Additive Logistic Regression: A Statistical View of Boosting," The Annals of Statistics, vol. 28, pp. 337-374, Apr. 2000.
[28] P.K. Mallapragada, R. Jin, A.K. Jain, and Y. Liu, "SemiBoost: Boosting for Semi-Supervised Learning," Technical Report MSU-CSE-07-197, Michigan State Univ., 2007.
[29] L. Mason, J. Baxter, P. Bartlett, and M. Frean, "Boosting Algorithms as Gradient Descent in Function Space," Proc. Neural Information Processing Systems Conf., pp. 512-518, 1999.
[30] T. Minka, "Expectation-Maximization as Lower Bound Maximization," tutorial, http://www-white.media.mit.edu/~tpminka/papers/em.html, 1998.
[31] A. Jain and X. Lu, "Ethnicity Identification from Face Images," Proc. SPIE, Defense and Security Symp., vol. 5404, pp. 114-123, 2004.
[32] A.K. Jain and F. Farrokhnia, "Unsupervised Texture Segmentation Using Gabor Filters," Pattern Recognition, vol. 24, pp. 1167-1186, 1991.
[33] I.H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, second ed. Morgan Kaufmann, 2005.
[34] L. Reyzin and R.E. Schapire, "How Boosting the Margin Can Also Boost Classifier Complexity," Proc. 23rd Int'l Conf. Machine Learning, pp. 753-760, 2006.
[35] J. Platt, N. Cristianini, and J. Shawe-Taylor, "Large Margin DAGs for Multiclass Classification," Proc. Neural Information Processing Systems Conf., pp. 547-553, 2000.