2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Las Vegas, NV, United States
June 27, 2016 to June 30, 2016
ISSN: 1063-6919
ISBN: 978-1-4673-8851-1
pp: 4829-4837
ABSTRACT
Feature representations, both hand-designed and learned, are often hard to analyze and interpret, even when extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that these features contain surprisingly rich information. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.
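To make the idea of inverting a representation with an up-convolutional network concrete, below is a minimal sketch in PyTorch. It is an illustrative assumption, not the authors' reported architecture: the feature dimensionality, layer widths, output resolution, and the plain pixel-wise regression loss are all placeholders chosen only to show the overall structure of a feature-to-image decoder.

```python
import torch
import torch.nn as nn

# Hypothetical up-convolutional decoder: maps a feature vector (e.g. the
# activations of a high layer of a classification network) back to a
# 64x64 RGB image. Layer sizes and resolution are illustrative assumptions.
class UpConvDecoder(nn.Module):
    def __init__(self, feat_dim=4096):
        super().__init__()
        # Project the feature vector to a small spatial map ...
        self.fc = nn.Linear(feat_dim, 256 * 4 * 4)
        # ... then repeatedly upsample with transposed ("up") convolutions.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),    # 16x16 -> 32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),     # 32x32 -> 64x64
            nn.Tanh(),  # reconstructed image in [-1, 1]
        )

    def forward(self, feat):
        x = self.fc(feat).view(-1, 256, 4, 4)
        return self.deconv(x)

# Training amounts to image regression: given pairs (feature, original image),
# minimize a pixel-wise loss between the reconstruction and the original.
decoder = UpConvDecoder(feat_dim=4096)
features = torch.randn(8, 4096)            # stand-in for extracted features
images = torch.rand(8, 3, 64, 64) * 2 - 1  # stand-in for target images in [-1, 1]
loss = nn.functional.mse_loss(decoder(features), images)
loss.backward()
```

Once trained on such pairs, the decoder can be applied to features of unseen images; the quality of the reconstructions then indicates how much image information the representation preserves.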
INDEX TERMS
Image reconstruction, Computer architecture, Training, Microprocessors, Image color analysis, Neural networks, Feature extraction
CITATION

A. Dosovitskiy and T. Brox, "Inverting Visual Representations with Convolutional Networks," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, United States, 2016, pp. 4829-4837.
doi:10.1109/CVPR.2016.522