2015 IEEE International Conference on Computer Vision (ICCV) (2015)
Santiago, Chile
Dec. 7, 2015 to Dec. 13, 2015
ISSN: 2380-7504
ISBN: 978-1-4673-8390-5
pp. 2668-2676
Generating captions that describe images is a fundamental problem combining computer vision and natural language processing. Recent works focus on descriptive phrases, such as "a white dog," to explain the visual content of an input image. Phrases can express objects, attributes, events, and their relations, while also reducing visual complexity. A caption for an input image can be generated by connecting estimated phrases with a grammar model. However, because phrases are combinations of several words, the number of distinct phrases is much larger than the number of single words; consequently, phrase estimation suffers from having too few training samples per phrase. In this paper, we propose a novel phrase-learning method: Common Subspace for Model and Similarity (CoSMoS). To overcome the shortage of training samples, CoSMoS obtains a subspace in which (a) all feature vectors associated with the same phrase are mapped close to one another, (b) a classifier is learned for each phrase, and (c) training samples are shared among co-occurring phrases. Experimental results demonstrate that our system is more accurate than earlier methods and that its accuracy improves as the amount of web training data increases.
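As a rough illustration of properties (a) and (b) only — the paper's actual CoSMoS objective is not reproduced here, and the sample-sharing of property (c) is omitted — the sketch below projects toy "phrase" feature vectors into a low-dimensional common subspace spanned by the per-phrase mean directions (an LDA-like choice, assumed here for simplicity) and attaches one nearest-centroid classifier per phrase. All data and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for phrase learning: n feature vectors, each labelled
# with one of k phrase ids (hypothetical data, not from the paper).
n, d, k = 90, 16, 3
y = rng.integers(0, k, size=n)             # phrase label per sample
X = rng.normal(size=(n, d)) + 2.0 * y[:, None]  # features loosely clustered by phrase

# (a) A common subspace where same-phrase features land close together:
# here, an orthonormal basis for the span of the per-phrase mean vectors.
means = np.stack([X[y == c].mean(axis=0) for c in range(k)])
W, _ = np.linalg.qr(means.T)               # d x k projection into the subspace
Z = X @ W                                  # all samples mapped into the subspace

# (b) One classifier per phrase: nearest centroid in the subspace.
centroids = np.stack([Z[y == c].mean(axis=0) for c in range(k)])
dists = ((Z[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
pred = np.argmin(dists, axis=1)
print("subspace accuracy:", (pred == y).mean())
```

Because same-phrase samples cluster in the shared subspace, even this crude centroid classifier separates the toy phrases well; the paper's contribution is learning such a subspace jointly with the classifiers under scarce per-phrase training data.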
Keywords: Training, Visualization, Learning systems, Neural networks, Grammar, Scalability, Feature extraction

Y. Ushiku, M. Yamaguchi, Y. Mukuta and T. Harada, "Common Subspace for Model and Similarity: Phrase Learning for Caption Generation from Images," 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2015, pp. 2668-2676.