Vol. 35, No. 2, Feb. 2013
ISSN: 0162-8828
pp: 314-328
Yuewei Lin , Dept. of Comput. Sci. & Eng., Univ. of South Carolina, Columbia, SC, USA
Yuan Yan Tang , Dept. of Comput. & Inf. Sci., Univ. of Macau, Macau, China
Bin Fang , Coll. of Comput. Sci., Chongqing Univ., Chongqing, China
Zhaowei Shang , Coll. of Comput. Sci., Chongqing Univ., Chongqing, China
Yonghui Huang , Coll. of Comput. Sci., Chongqing Univ., Chongqing, China
Song Wang , Dept. of Comput. Sci. & Eng., Univ. of South Carolina, Columbia, SC, USA
ABSTRACT
This paper introduces a new computational visual-attention model for constructing static and dynamic saliency maps. First, we use the Earth Mover's Distance (EMD) to measure the center-surround difference in the receptive field, instead of the Difference-of-Gaussian filter widely used in previous visual-attention models. Second, we combine different features in two biologically inspired nonlinear steps: subsets of basic features are first merged into a set of super features using the L_m-norm, and the super features are then combined through a Winner-Take-All mechanism. Third, we extend the proposed model to construct dynamic saliency maps from videos by using EMD to compute the center-surround difference in the spatiotemporal receptive field. We evaluate the performance of the proposed model on both static image data and video data. Comparison results show that the proposed model outperforms several existing models under a unified evaluation setting.
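The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, the three ingredients named in the abstract: an EMD-based center-surround difference computed over local intensity histograms, an L_m-norm combination of basic feature maps into a super feature, and a winner-take-all selection across super features. The window sizes, bin count, and norm order m are illustrative choices, and SciPy's one-dimensional wasserstein_distance stands in for the EMD computation described in the paper.

import numpy as np
from scipy.stats import wasserstein_distance  # 1-D EMD between weighted samples

def center_surround_emd(image, center=3, surround=9, bins=16):
    """Per-pixel saliency: EMD between center- and surround-window intensity histograms."""
    h, w = image.shape
    pad = surround // 2
    padded = np.pad(image, pad, mode="reflect")
    bin_centers = (np.arange(bins) + 0.5) / bins
    saliency = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            block = padded[i:i + surround, j:j + surround]
            c0 = (surround - center) // 2
            center_block = block[c0:c0 + center, c0:c0 + center]
            hc, _ = np.histogram(center_block, bins=bins, range=(0, 1), density=True)
            hs, _ = np.histogram(block, bins=bins, range=(0, 1), density=True)
            # EMD between the two normalized histograms, placed on the bin centers
            saliency[i, j] = wasserstein_distance(
                bin_centers, bin_centers, hc + 1e-9, hs + 1e-9)
    return saliency

def combine_features(feature_maps, m=2.0):
    """L_m-norm (power-mean) combination of basic feature maps into one super feature."""
    stack = np.stack(feature_maps)             # shape (k, H, W)
    return np.mean(stack ** m, axis=0) ** (1.0 / m)

def winner_take_all(super_maps):
    """Keep, per pixel, the strongest super-feature response."""
    return np.max(np.stack(super_maps), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))                 # toy grayscale image with values in [0, 1]
    intensity_sal = center_surround_emd(img)
    super_map = combine_features([intensity_sal])
    final_map = winner_take_all([super_map])
    print(final_map.shape, float(final_map.max()))

In the paper, the same EMD-based center-surround measure is extended from the spatial receptive field to the spatiotemporal receptive field to produce dynamic saliency maps from video; the sketch covers only the static case.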
INDEX TERMS
Computational modeling, Visualization, Histograms, Biological system modeling, Educational institutions, Humans, Earth, spatiotemporal receptive field (STRF), Visual attention, saliency maps, dynamic saliency maps, earth mover's distance (EMD)
CITATION
Yuewei Lin, Yuan Yan Tang, Bin Fang, Zhaowei Shang, Yonghui Huang, Song Wang, "A Visual-Attention Model Using Earth Mover's Distance-Based Saliency Measurement and Nonlinear Feature Combination", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 2, pp. 314-328, Feb. 2013, doi:10.1109/TPAMI.2012.119