2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
Las Vegas, NV, United States
June 27, 2016 to June 30, 2016
ISSN: 1063-6919
ISBN: 978-1-4673-8851-1
pp: 270-279
ABSTRACT
An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. Despite the astonishing performance, deep features extracted from low-level neurons remain unsatisfactory, arguably because they cannot access the spatial context contained in the higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying high-level context and improving the descriptive power of low-level and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatial-weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets.
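The sketch below is not the paper's InterActive algorithm; it only illustrates the general idea the abstract describes, i.e. re-weighting low-level deep features with a top-down spatial signal before pooling them into an image descriptor. As a stand-in for the propagated activeness, it uses the gradient of a high-layer response with respect to a low-layer feature map; the network (VGG-16) and the layer indices are illustrative assumptions, not choices taken from the paper.

```python
import torch
import torchvision.models as models

# Pre-trained CNN used purely for illustration (VGG-16 convolutional stack).
vgg = models.vgg16(pretrained=True).features.eval()


def spatially_weighted_descriptor(image, low_layer=15, high_layer=29):
    """Pool a low-level feature map with top-down spatial weights.

    NOTE: this is a hypothetical proxy for 'activeness', using the gradient
    of the high-layer response w.r.t. the low-layer map, not the paper's
    inter-layer propagation rule. `low_layer`/`high_layer` are assumed
    indices into vgg16.features.
    """
    feats = {}
    x = image.unsqueeze(0)                      # (1, 3, H, W)
    for idx, layer in enumerate(vgg):
        x = layer(x)
        if idx == low_layer:
            x.retain_grad()                     # keep grad of low-level map
            feats['low'] = x
        if idx == high_layer:
            feats['high'] = x
            break

    # Top-down signal: back-propagate the summed high-layer response.
    feats['high'].sum().backward()

    low = feats['low']                          # (1, C, h, w)
    weight = low.grad.clamp(min=0)              # non-negative importance
    weight = weight.sum(dim=1, keepdim=True)    # (1, 1, h, w) spatial map
    weight = weight / (weight.sum() + 1e-8)     # normalize over positions

    # Spatially weighted pooling of low-level responses -> C-dim descriptor.
    return (low * weight).sum(dim=(2, 3)).squeeze(0)
```

Given a preprocessed image tensor of shape (3, H, W), the returned vector can be used as a deep feature for classification in place of plainly average-pooled low-layer responses; the spatial weights play the role of the "spatial-weighted neuron responses" mentioned in the abstract.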
INDEX TERMS
Neurons, Feature extraction, Visualization, Context, Biological neural networks, Computer vision, Distribution functions
CITATION

L. Xie, L. Zheng, J. Wang, A. Yuille and Q. Tian, "InterActive: Inter-Layer Activeness Propagation," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, United States, 2016, pp. 270-279.
doi:10.1109/CVPR.2016.36