2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2015)
Boston, MA, USA
June 7-12, 2015
Joe Yue-Hei Ng, University of Maryland, College Park, 20742, United States
Fan Yang, University of Maryland, College Park, 20742, United States
Larry S. Davis, University of Maryland, College Park, 20742, United States
Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers, or higher layers with finer scales, produce better results for image retrieval than the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD- and CNN-based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks.
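The pipeline described in the abstract (local convolutional features → VLAD encoding → compression to a 128-D descriptor) can be sketched as follows. This is a minimal illustration, not the paper's implementation: random arrays stand in for the convolutional feature maps of OxfordNet/GoogLeNet, and the 14×14 spatial grid, descriptor dimension, codebook size, and use of scikit-learn KMeans/PCA are assumptions chosen for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def vlad_encode(features, kmeans):
    """VLAD-encode local descriptors (N, D) into a single K*D vector."""
    centers = kmeans.cluster_centers_          # (K, D) visual words
    assign = kmeans.predict(features)          # nearest center per descriptor
    vlad = np.zeros_like(centers)
    for k in range(centers.shape[0]):
        mask = assign == k
        if mask.any():
            # accumulate residuals of descriptors to their assigned center
            vlad[k] = (features[mask] - centers[k]).sum(axis=0)
    vlad = vlad.ravel()
    # signed square-root ("power") then L2 normalization, standard for VLAD
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    n = np.linalg.norm(vlad)
    return vlad / n if n > 0 else vlad

rng = np.random.default_rng(0)
D, K = 128, 64                                 # assumed descriptor dim, codebook size
codebook = KMeans(n_clusters=K, n_init=4, random_state=0).fit(
    rng.standard_normal((2000, D)))            # stand-in training descriptors

# each "image" contributes a 14x14 grid of local conv activations -> 196 x D
images = [rng.standard_normal((196, D)) for _ in range(200)]
X = np.stack([vlad_encode(f, codebook) for f in images])   # (200, K*D)

# compress the high-dimensional VLAD vectors to 128-D, as in the paper
X128 = PCA(n_components=128).fit_transform(X)
print(X128.shape)                              # (200, 128)
```

In the actual method the local descriptors would come from an intermediate convolutional layer's activations (one D-dimensional vector per spatial location) rather than random data, and the codebook would be learned on descriptors from a training set.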
Index Terms: Artificial neural networks,
J. Y. Ng, F. Yang and L. S. Davis, "Exploiting local features from deep networks for image retrieval," 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, MA, USA, 2015, pp. 53-61.