2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
Boston, MA, USA
June 7, 2015 to June 12, 2015
ISSN: 1063-6919
ISBN: 978-1-4673-6963-3
pp: 1473-1482
Hao Fang , Microsoft Research, Beijing 100080, China
Saurabh Gupta , Microsoft Research, Beijing 100080, China
Forrest Iandola , Microsoft Research, Beijing 100080, China
Rupesh K. Srivastava , Microsoft Research, Beijing 100080, China
Li Deng , Microsoft Research, Beijing 100080, China
Piotr Dollar , Microsoft Research, Beijing 100080, China
Jianfeng Gao , Microsoft Research, Beijing 100080, China
Xiaodong He , Microsoft Research, Beijing 100080, China
Margaret Mitchell , Microsoft Research, Beijing 100080, China
John C. Platt , Microsoft Research, Beijing 100080, China
C. Lawrence Zitnick , Microsoft Research, Beijing 100080, China
Geoffrey Zweig , Microsoft Research, Beijing 100080, China
ABSTRACT
This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.
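The abstract's multiple instance learning step treats each image as a "bag" of regions and combines per-region word probabilities into an image-level detection. A standard way to do this, and the combination the paper describes, is noisy-OR: the word is present if at least one region fires. The sketch below is illustrative only (the function name and inputs are assumptions, not the authors' code):

```python
def noisy_or(region_probs):
    """Noisy-OR multiple instance learning: probability that a word
    appears somewhere in an image, given per-region detection
    probabilities (the "bag" of instances for that image).

    P(word | image) = 1 - prod_i (1 - P(word | region_i))
    """
    prod_absent = 1.0
    for p in region_probs:
        prod_absent *= (1.0 - p)  # probability no region detects the word
    return 1.0 - prod_absent

# Two regions each weakly detecting "dog" yield a stronger image-level score:
# noisy_or([0.5, 0.5]) -> 0.75
```

Under this combination, training only needs caption-level labels (whether a word occurs in any caption for the image), not region-level annotations, which is what lets the detectors be learnt directly from captions.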
CITATION

H. Fang et al., "From captions to visual concepts and back," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 2015, pp. 1473-1482.
doi:10.1109/CVPR.2015.7298754