Random Field Model for Integration of Local Information and Global Information
August 2008 (vol. 30 no. 8)
pp. 1483-1489
Takahiro Toyoda, Tokyo Institute of Technology, Kanagawa-ken
Osamu Hasegawa, Tokyo Institute of Technology, Yokohama
This paper proposes a general framework that explicitly models local and global information in a conditional random field. The method extracts global image features as well as local ones and uses them to predict the scene of the input image. Top-down information is then generated from the predicted scene; it represents a global spatial configuration of labels and category compatibility over the image. Incorporating this global information helps resolve local ambiguities and yields locally and globally consistent image recognition. Despite the model's simplicity, the proposed method demonstrates good performance in image labeling on two datasets.
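
A minimal sketch of the kind of model the abstract describes, in generic conditional random field notation; the potentials f, g, h and the scene variable s(y) are assumptions for illustration, not the authors' exact definitions:

% x: label field over image sites, y: input image
% s(y): scene category predicted from global image features (assumed notation)
% f: local (unary) potential, g: pairwise potential,
% h: scene-based top-down compatibility between a site label and the predicted scene
\[
P(\mathbf{x} \mid \mathbf{y}) \;=\; \frac{1}{Z(\mathbf{y})}
\exp\!\Big( \sum_{i} f_i(x_i, \mathbf{y})
          + \sum_{(i,j) \in E} g_{ij}(x_i, x_j, \mathbf{y})
          + \sum_{i} h_i\big(x_i, s(\mathbf{y})\big) \Big)
\]

In this sketch, the third term plays the role of the scene-based top-down information described above: by tying each local label to a globally predicted scene, it can resolve local ambiguities that the unary and pairwise terms alone cannot.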

Index Terms:
Pixel classification, Markov random fields, Scene analysis
Citation:
Takahiro Toyoda, Osamu Hasegawa, "Random Field Model for Integration of Local Information and Global Information," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 8, pp. 1483-1489, Aug. 2008, doi:10.1109/TPAMI.2008.105