
Bibliographic References
Fei Wang and Changshui Zhang, "Label Propagation through Linear Neighborhoods," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 1, pp. 55-67, Jan. 2008.
[1] M.F. Balcan, A. Blum, P.P. Choi, J. Lafferty, B. Pantano, M.R. Rwebangira, and X. Zhu, "Person Identification in Webcam Images: An Application of Semi-Supervised Learning," Proc. ICML Workshop Learning with Partially Classified Training Data, 2005.
[2] M. Belkin and P. Niyogi, "Laplacian Eigenmaps for Dimensionality Reduction and Data Representation," Neural Computation, vol. 15, no. 6, pp. 1373-1396, 2003.
[3] M. Belkin, I. Matveeva, and P. Niyogi, "Regularization and Semi-Supervised Learning on Large Graphs," Proc. 17th Ann. Conf. Learning Theory (COLT '04), pp. 624-638, 2004.
[4] M. Belkin, P. Niyogi, and V. Sindhwani, "Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples," J. Machine Learning Research, vol. 7, pp. 2399-2434, Nov. 2006.
[5] Y. Bengio, M. Monperrus, and H. Larochelle, "Nonlocal Estimation of Manifold Structure," Neural Computation, vol. 18, no. 10, pp. 2509-2528, 2006.
[6] A. Blum and T. Mitchell, "Combining Labeled and Unlabeled Data with Co-Training," Proc. 11th Ann. Conf. Computational Learning Theory (COLT '98), pp. 92-100, 1998.
[7] A. Blum and S. Chawla, "Learning from Labeled and Unlabeled Data Using Graph Mincuts," Proc. 18th Int'l Conf. Machine Learning (ICML '01), pp. 19-26, 2001.
[8] M.A. Carreira-Perpinan and R.S. Zemel, "Proximity Graphs for Clustering and Manifold Learning," Advances in Neural Information Processing Systems 17, L.K. Saul, Y. Weiss, and L. Bottou, eds., pp. 225-232, MIT Press, 2005.
[9] O. Chapelle, J. Weston, and B. Schölkopf, "Cluster Kernels for Semi-Supervised Learning," Advances in Neural Information Processing Systems 15, S. Becker, S. Thrun, and K. Obermayer, eds., pp. 601-608, MIT Press, 2003.
[10] O. Chapelle, B. Schölkopf, and A. Zien, Semi-Supervised Learning, p. 371. MIT Press, 2006.
[11] O. Delalleau, Y. Bengio, and N. Le Roux, "Non-Parametric Function Induction in Semi-Supervised Learning," Proc. 10th Int'l Workshop Artificial Intelligence and Statistics (AISTATS '05), pp. 96-103, 2005.
[12] A.P. Dempster, N.M. Laird, and D.B. Rubin, "Maximum Likelihood from Incomplete Data via the EM Algorithm," J. Royal Statistical Soc., Series B, vol. 39, no. 1, pp. 1-38, 1977.
[13] F.R.K. Chung, "Spectral Graph Theory," CBMS Regional Conf. Series in Mathematics, vol. 92, published for the Conf. Board of the Mathematical Sciences, Washington, D.C., 1997.
[14] G.H. Golub and C.F. Van Loan, Matrix Computations, second ed. Johns Hopkins Univ. Press, 1989.
[15] A.K. Jain and R.C. Dubes, Algorithms for Clustering Data, Prentice Hall Advanced Reference Series. Prentice Hall, 1988.
[16] T. Joachims, "Transductive Inference for Text Classification Using Support Vector Machines," Proc. 16th Int'l Conf. Machine Learning (ICML '99), pp. 200-209, 1999.
[17] T. Joachims, "Transductive Learning via Spectral Graph Partitioning," Proc. 20th Int'l Conf. Machine Learning (ICML '03), pp. 290-297, 2003.
[18] N. Kambhatla and T.K. Leen, "Dimension Reduction by Local Principal Component Analysis," Neural Computation, vol. 9, no. 7, pp. 1493-1516, 1997.
[19] A. Kapoor, Y. Qi, H. Ahn, and R.W. Picard, "Hyperparameter and Kernel Learning for Graph-Based Semi-Supervised Classification," Advances in Neural Information Processing Systems, 2005.
[20] N.D. Lawrence and M.I. Jordan, "Semi-Supervised Learning via Gaussian Processes," Advances in Neural Information Processing Systems 17, L.K. Saul, Y. Weiss, and L. Bottou, eds., MIT Press, 2005.
[21] D.J. Miller and U.S. Uyar, "A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data," Advances in Neural Information Processing Systems 9, M. Mozer, M.I. Jordan, and T. Petsche, eds., pp. 571-577, MIT Press, 1997.
[22] K. Nigam, A.K. McCallum, S. Thrun, and T. Mitchell, "Text Classification from Labeled and Unlabeled Documents Using EM," Machine Learning, vol. 39, nos. 2-3, pp. 103-134, 2000.
[23] J.R. Quinlan, "Induction of Decision Trees," Machine Learning, vol. 1, no. 1, pp. 81-106, 1986.
[24] S.T. Roweis and L.K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, pp. 2323-2326, 2000.
[25] L.K. Saul, K.Q. Weinberger, J.H. Ham, F. Sha, and D.D. Lee, "Spectral Methods for Dimensionality Reduction," Semi-Supervised Learning, O. Chapelle, B. Schölkopf, and A. Zien, eds. MIT Press, 2006.
[26] B. Schölkopf and A.J. Smola, Learning with Kernels. MIT Press, 2002.
[27] B. Shahshahani and D. Landgrebe, "The Effect of Unlabeled Samples in Reducing the Small Sample Size Problem and Mitigating the Hughes Phenomenon," IEEE Trans. Geoscience and Remote Sensing, vol. 32, no. 5, pp. 1087-1095, 1994.
[28] M. Szummer and T. Jaakkola, "Partially Labeled Classification with Markov Random Walks," Advances in Neural Information Processing Systems 14, T.G. Dietterich, S. Becker, and Z. Ghahramani, eds., pp. 945-952, 2002.
[29] J.B. Tenenbaum, V. de Silva, and J.C. Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction," Science, vol. 290, pp. 2319-2323, 2000.
[30] V.N. Vapnik, The Nature of Statistical Learning Theory. Springer, 1995.
[31] F. Wang and C. Zhang, "Label Propagation through Linear Neighborhoods," Proc. 23rd Int'l Conf. Machine Learning (ICML '06), pp. 985-992, 2006.
[32] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Schölkopf, "Learning with Local and Global Consistency," Advances in Neural Information Processing Systems 16, S. Thrun, L. Saul, and B. Schölkopf, eds., pp. 321-328, 2004.
[33] D. Zhou and B. Schölkopf, “Learning from Labeled and Unlabeled Data Using Random Walks,” Proc. 26th Pattern Recognition Symp. (DAGM '04), 2004.
[34] D. Zhou, B. Schölkopf, and T. Hofmann, "Semi-Supervised Learning on Directed Graphs," Advances in Neural Information Processing Systems 17, L.K. Saul, Y. Weiss, and L. Bottou, eds., pp. 1633-1640, MIT Press, 2005.
[35] X. Zhu, Z. Ghahramani, and J. Lafferty, "Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions," Proc. 20th Int'l Conf. Machine Learning (ICML '03), 2003.
[36] X. Zhu and Z. Ghahramani, "Learning from Labeled and Unlabeled Data with Label Propagation," Technical Report CMU-CALD-02-107, Carnegie Mellon Univ., 2002.
[37] X. Zhu and Z. Ghahramani, "Towards Semi-Supervised Classification with Markov Random Fields," Technical Report CMU-CALD-02-106, Carnegie Mellon Univ., 2002.
[38] X. Zhu, J. Lafferty, and Z. Ghahramani, "Semi-Supervised Learning: From Gaussian Fields to Gaussian Processes," Technical Report CMU-CS-03-175, Carnegie Mellon Univ., 2003.
[39] X. Zhu, "Semi-Supervised Learning Literature Survey," Computer Sciences Technical Report 1530, Univ. of Wisconsin, Madison, 2006.