2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017)
Honolulu, Hawaii, USA
July 21–26, 2017
This paper presents a framework for saliency estimation and fixation prediction in videos. The proposed framework is based on a hierarchical feature representation obtained by stacking convolutional layers of independent subspace analysis (ISA) filters; the feature learning is thus unsupervised and task-independent. To compute saliency, we employ a multiresolution architecture that exploits both local and global saliency: an image pyramid is first built for a given frame, a saliency map is then computed at each resolution from both local and global saliency measures, and the maps are integrated over the pyramid to yield the final video saliency. We first show that combining local and global saliency improves the results. We then compare the proposed model with several video saliency models and demonstrate that it predicts video saliency effectively, outperforming all the other models.
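The multiresolution pipeline described in the abstract (pyramid, per-level local and global saliency, fusion across levels) might be sketched as follows. This is a minimal illustration with placeholder saliency measures: the center-surround contrast, global-rarity term, multiplicative fusion, and average-pooling pyramid used here are assumptions for exposition, not the paper's ISA-based features or its actual combination rule.

```python
import numpy as np

def downsample(img, factor=2):
    # Average-pooling downsample: a simple stand-in for one pyramid step.
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def local_saliency(feat, k=3):
    # Local cue (placeholder): deviation of each pixel from its k x k neighborhood mean.
    pad = k // 2
    padded = np.pad(feat, pad, mode="edge")
    local_mean = np.zeros_like(feat)
    for dy in range(k):
        for dx in range(k):
            local_mean += padded[dy:dy + feat.shape[0], dx:dx + feat.shape[1]]
    local_mean /= k * k
    return np.abs(feat - local_mean)

def global_saliency(feat):
    # Global cue (placeholder): rarity of a response relative to the whole map.
    return np.abs(feat - feat.mean())

def multiresolution_saliency(feat, levels=3):
    # Build a pyramid, score each level with both cues, and fuse the
    # upsampled per-level maps into one full-resolution saliency map.
    h, w = feat.shape
    total = np.zeros((h, w))
    cur = feat
    for _ in range(levels):
        s = local_saliency(cur) * global_saliency(cur)  # assumed fusion of the two cues
        ry, rx = h // s.shape[0], w // s.shape[1]
        up = np.repeat(np.repeat(s, ry, axis=0), rx, axis=1)[:h, :w]  # nearest-neighbor upsample
        if up.max() > 0:
            up /= up.max()  # normalize each level before integration
        total += up
        cur = downsample(cur)
    return total / levels
```

In this sketch `feat` stands in for one channel of the learned hierarchical feature response on a frame; in the actual framework the per-level maps come from the ISA feature hierarchy rather than raw intensities.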
Videos, Computational modeling, Training, Visualization, Feature extraction, Predictive models, Estimation
J. Wang, H. R. Tavakoli and J. Laaksonen, "Fixation Prediction in Videos Using Unsupervised Hierarchical Features," 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, Hawaii, USA, 2017, pp. 2225-2232.