Issue No. 04 - April 2013 (vol. 35)
Yansong Feng, Institute of Computer Science and Technology, Peking University, Beijing, China
Mirella Lapata, Institute for Language, Cognition and Computation, University of Edinburgh, Edinburgh, UK
This paper is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Examples include video and image retrieval as well as the development of tools that help visually impaired individuals access pictorial information. Our approach leverages the vast resource of pictures available on the web and the fact that many of them are captioned and co-located with thematically related documents. Our model learns to create captions from a database of news articles, the pictures embedded in them, and their captions, and consists of two stages: content selection identifies what the image and accompanying article are about, whereas surface realization determines how to verbalize the chosen content. We approximate content selection with a probabilistic image annotation model that suggests keywords for an image. The model postulates that images and their textual descriptions are generated by a shared set of latent variables (topics) and is trained on a weakly labeled dataset that treats the captions and associated news articles as image labels. Inspired by recent work in summarization, we propose extractive and abstractive surface realization models. Experimental results show that it is viable to generate captions that are pertinent to the specific content of an image and its associated article, while permitting creativity in the description. Indeed, the output of our abstractive model compares favorably to handwritten captions and is often superior to extractive methods.
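The content-selection idea described above can be illustrated with a toy sketch. This is not the paper's actual model: the topic distributions, vocabulary, and mixture weights below are hypothetical, and real latent-topic models infer these distributions from data. The sketch only shows the marginalization the abstract alludes to, where a keyword's score for an image is p(word | image) = Σ_k p(word | topic_k) · p(topic_k | image), with topics shared between images and their textual descriptions.

```python
# Hypothetical per-topic word distributions, standing in for distributions
# a latent-topic model would learn from captions and news articles.
topic_word = {
    "sports":   {"match": 0.4, "player": 0.35, "market": 0.05, "minister": 0.2},
    "politics": {"match": 0.05, "player": 0.05, "market": 0.3, "minister": 0.6},
}

def suggest_keywords(topic_mix, n=2):
    """Rank candidate keywords for an image by marginalizing over topics:
    p(word | image) = sum_k p(word | topic_k) * p(topic_k | image)."""
    vocab = set().union(*(d.keys() for d in topic_word.values()))
    scores = {
        w: sum(topic_mix.get(t, 0.0) * dist.get(w, 0.0)
               for t, dist in topic_word.items())
        for w in vocab
    }
    return sorted(scores, key=scores.get, reverse=True)[:n]

# An image whose inferred topic mixture is mostly "sports":
print(suggest_keywords({"sports": 0.8, "politics": 0.2}))  # ['match', 'player']
```

In the paper's pipeline, keywords selected this way would then be handed to the surface-realization stage, which verbalizes them either extractively or abstractively.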
Visualization, Humans, Databases, Vocabulary, Probabilistic logic, Data models, Noise measurement, Topic models, Caption generation, Image annotation, Summarization
Yansong Feng, Mirella Lapata, "Automatic Caption Generation for News Images", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 4, pp. 797-812, April 2013, doi:10.1109/TPAMI.2012.118