Linear Neighborhood Propagation and Its Applications
September 2009 (vol. 31, no. 9)
pp. 1600-1615
Jingdong Wang, Microsoft Research Asia, Beijing
Fei Wang, Tsinghua University, Beijing
Changshui Zhang, Tsinghua University, Beijing
Helen C. Shen, The Hong Kong University of Science and Technology, Hong Kong
Long Quan, The Hong Kong University of Science and Technology, Hong Kong
In this paper, a novel graph-based transductive classification approach, called Linear Neighborhood Propagation, is proposed. The basic idea is to predict the label of a data point from its neighbors in a linear way. This method can be cast into a second-order intrinsic Gaussian Markov random field framework, and its result corresponds to a solution of an approximate inhomogeneous biharmonic equation with Dirichlet boundary conditions. Unlike existing approaches, ours constructs the graph with multiple-wise edges rather than pairwise edges and presents an effective scheme to estimate the weights of such multiple-wise edges. To the best of our knowledge, these two contributions are novel for semi-supervised classification. Experimental results on image segmentation and transductive classification demonstrate the effectiveness and efficiency of the proposed approach.
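To make the prediction rule above concrete, a minimal Python sketch follows: each point is first reconstructed from its k nearest neighbors with nonnegative weights that sum to one, and labels are then propagated over the resulting graph. The least-squares weight solver, the clipping to nonnegative values, and the parameters k, alpha, and n_iter are illustrative assumptions in the spirit of linear neighborhood propagation (cf. Wang and Zhang, "Label Propagation through Linear Neighborhoods," IEEE TKDE, 2008), not the exact formulation of this paper.

import numpy as np

def linear_neighborhood_weights(X, k=5, reg=1e-3):
    # Reconstruct each point from its k nearest neighbors in a linear way;
    # row i of W holds point i's reconstruction weights over its neighbors.
    n = X.shape[0]
    W = np.zeros((n, n))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]         # k nearest neighbors, excluding i itself
        Z = X[nbrs] - X[i]                        # neighbors centered on x_i
        G = Z @ Z.T                               # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)        # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(k))        # unnormalized LLE-style weights
        w = np.clip(w, 0.0, None)                 # keep weights nonnegative (assumption)
        W[i, nbrs] = w / max(w.sum(), 1e-12)      # weights sum to one
    return W

def propagate_labels(W, y, labeled, alpha=0.99, n_iter=200):
    # Iteratively predict each point's label from its neighbors' labels.
    classes = np.unique(y[labeled])
    Y = np.zeros((len(y), len(classes)))
    for j, c in enumerate(classes):
        Y[(y == c) & labeled, j] = 1.0            # one-hot targets for the labeled points
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (W @ F) + (1.0 - alpha) * Y   # neighborhood propagation step
    return classes[F.argmax(axis=1)]

# Toy usage: two Gaussian blobs with a single labeled point per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(2.0, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
labeled = np.zeros(60, dtype=bool)
labeled[[0, 30]] = True
pred = propagate_labels(linear_neighborhood_weights(X, k=8), y, labeled)
print((pred == y).mean())                         # fraction of points labeled correctly

Since W is row stochastic and alpha < 1, the iteration converges to the closed-form fixed point F = (1 - alpha)(I - alpha W)^{-1} Y, which can be computed directly instead of iterating.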

Index Terms:
Gaussian Markov random fields, linear neighborhood propagation, transductive classification, image segmentation.
Citation:
Jingdong Wang, Fei Wang, Changshui Zhang, Helen C. Shen, Long Quan, "Linear Neighborhood Propagation and Its Applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 9, pp. 1600-1615, Sept. 2009, doi:10.1109/TPAMI.2008.216