2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
Las Vegas, NV, United States
June 27, 2016 to June 30, 2016
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/CVPR.2016.9
We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation at https://github.com/mjhucla/Google_Refexp_toolbox.
Keywords: Context, Google, Visualization, Training, Machine learning, Recurrent neural networks, Automobiles
J. Mao, J. Huang, A. Toshev, O. Camburu, A. Yuille and K. Murphy, "Generation and Comprehension of Unambiguous Object Descriptions," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, United States, 2016, pp. 11-20.