2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
Boston, MA, USA
June 7, 2015 to June 12, 2015
ISSN: 1063-6919
ISBN: 978-1-4673-6963-3
pp: 2994-3002
Dingwen Zhang, Northwestern Polytechnical University, China
Junwei Han, Microsoft Research, China
ABSTRACT
With the goal of effectively identifying common and salient objects in a group of relevant images, co-saliency detection has become essential for many applications such as video foreground extraction, surveillance, image retrieval, and image annotation. In this paper, we propose a unified co-saliency detection framework by introducing two novel insights: 1) looking deep to transfer higher-level representations by using a convolutional neural network with additional adaptive layers could better reflect the properties of the co-salient objects, especially their consistency among the image group; 2) looking wide to take advantage of the visually similar neighbors beyond a certain image group could effectively suppress the influence of the common background regions when formulating the intra-group consistency. In the proposed framework, the wide and deep information is explored for the object proposal windows extracted from each image, and the co-saliency scores are calculated by integrating the intra-image contrast and intra-group consistency via a principled Bayesian formulation. Finally, the window-level co-saliency scores are converted to superpixel-level co-saliency maps through a foreground region agreement strategy. Comprehensive experiments on two benchmark datasets have demonstrated the consistent performance gain of the proposed approach.
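As a rough illustration of the pipeline the abstract describes, the sketch below combines a per-window intra-image contrast score with an intra-group consistency score via a naive-Bayes-style posterior, then spreads the window-level scores onto superpixels by averaging over covering windows. All function names, the conditional-independence assumption, and the mean-pooling window-to-superpixel step are simplifications for illustration only, not the authors' actual Bayesian formulation or foreground region agreement strategy.

```python
import numpy as np

def window_cosaliency(contrast, consistency, eps=1e-8):
    """Posterior-like co-saliency score per proposal window, treating the
    two cues as conditionally independent (a strong simplification)."""
    contrast = np.clip(contrast, eps, 1 - eps)
    consistency = np.clip(consistency, eps, 1 - eps)
    fg = contrast * consistency              # evidence for co-salient object
    bg = (1 - contrast) * (1 - consistency)  # evidence for background
    return fg / (fg + bg)

def superpixel_map(window_scores, coverage):
    """coverage[i, j] = 1 if window j overlaps superpixel i; each
    superpixel gets the mean score of the windows that cover it."""
    counts = coverage.sum(axis=1)
    sums = coverage @ window_scores
    return np.where(counts > 0, sums / np.maximum(counts, 1.0), 0.0)

# Toy example: 3 proposal windows scored on both cues, 4 superpixels.
scores = window_cosaliency(np.array([0.9, 0.6, 0.2]),
                           np.array([0.8, 0.7, 0.3]))
cov = np.array([[1, 0, 0],
                [1, 1, 0],
                [0, 1, 1],
                [0, 0, 1]], dtype=float)
print(superpixel_map(scores, cov))
```

The product form rewards windows that score high on both cues, so a window with strong contrast but weak group consistency (a background region shared across images) is suppressed, which is the effect the "looking wide" insight aims for.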
CITATION

D. Zhang, J. Han, C. Li and J. Wang, "Co-saliency detection via looking deep and wide," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 2015, pp. 2994-3002.
doi:10.1109/CVPR.2015.7298918