2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017)
Honolulu, Hawaii, USA
July 21, 2017 to July 26, 2017
ISSN: 2160-7516
ISBN: 978-1-5386-0733-6
pp: 2225-2232
ABSTRACT
This paper presents a framework for saliency estimation and fixation prediction in videos. The proposed framework is based on a hierarchical feature representation obtained by stacking convolutional layers of independent subspace analysis (ISA) filters; the feature learning is thus unsupervised and task-independent. To compute saliency, we employ a multiresolution architecture that exploits both local and global saliency: for a given image, an image pyramid is first built, and at each resolution both local and global saliency measures are computed to obtain a saliency map. Integrating the saliency maps over the image pyramid yields the final video saliency. We first show that combining local and global saliency improves the results. We then compare the proposed model with several video saliency models and demonstrate that it predicts video saliency effectively, outperforming all the other models.
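The multiresolution local/global scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the `local_saliency` (center-surround contrast), `global_saliency` (histogram rarity), average-pooling pyramid, and the product combination of the two cues are all assumptions standing in for the paper's ISA-based features and saliency measures.

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Build an image pyramid by 2x2 average-pooling at each level (assumed downsampling)."""
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        cropped = pyr[-1][:h - h % 2, :w - w % 2]
        pyr.append(cropped.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

def local_saliency(feat, k=3):
    """Local cue (assumed): center-surround contrast, |feature - local k x k mean|."""
    pad = k // 2
    padded = np.pad(feat, pad, mode='edge')
    h, w = feat.shape
    acc = np.zeros_like(feat)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + h, dx:dx + w]
    return np.abs(feat - acc / (k * k))

def global_saliency(feat, bins=16):
    """Global cue (assumed): rarity, i.e. inverse frequency of each value's histogram bin."""
    hist, edges = np.histogram(feat, bins=bins)
    prob = hist / hist.sum()
    idx = np.clip(np.digitize(feat, edges[1:-1]), 0, bins - 1)
    return 1.0 - prob[idx]

def upsample(sal, shape):
    """Nearest-neighbour upsampling of a per-level map back to the full resolution."""
    ry = np.linspace(0, sal.shape[0] - 1, shape[0]).round().astype(int)
    rx = np.linspace(0, sal.shape[1] - 1, shape[1]).round().astype(int)
    return sal[np.ix_(ry, rx)]

def multiresolution_saliency(feat_map, levels=3):
    """Per level: combine local and global cues; then integrate over the pyramid."""
    acc = np.zeros_like(feat_map)
    for level in build_pyramid(feat_map, levels):
        sal = local_saliency(level) * global_saliency(level)  # combine the two cues
        acc += upsample(sal, feat_map.shape)
    acc -= acc.min()
    return acc / (acc.max() + 1e-8)  # normalize to [0, 1]

# Toy input: a random 64x64 feature map in place of learned ISA features.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
sal_map = multiresolution_saliency(frame)
```

In the paper the per-resolution input would be the ISA feature responses rather than a raw single-channel map, and the integration over the pyramid produces the final fixation-prediction map.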
INDEX TERMS
Videos, Computational modeling, Training, Visualization, Feature extraction, Predictive models, Estimation
CITATION

J. Wang, H. R. Tavakoli and J. Laaksonen, "Fixation Prediction in Videos Using Unsupervised Hierarchical Features," 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, Hawaii, USA, 2017, pp. 2225-2232.
doi:10.1109/CVPRW.2017.276