2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Las Vegas, NV, United States
June 27, 2016 to June 30, 2016
ISSN: 1063-6919
ISBN: 978-1-4673-8851-1
pp: 4641-4650
ABSTRACT
With the recent popularity of animated GIFs on social media, there is a need for ways to index them with rich metadata. To advance research on animated GIF understanding, we collected a new dataset, Tumblr GIF (TGIF), with 100K animated GIFs from Tumblr and 120K natural language descriptions obtained via crowdsourcing. The motivation for this work is to develop a testbed for image sequence description systems, where the task is to generate natural language descriptions for animated GIFs or video clips. To ensure a high-quality dataset, we developed a series of novel quality controls to validate free-form text input from crowd workers. We show that there is an unambiguous association between visual content and natural language descriptions in our dataset, making it an ideal benchmark for the visual content captioning task. We perform extensive statistical analyses to compare our dataset to existing image and video description datasets. Next, we provide baseline results on the animated GIF description task, using three representative techniques: nearest neighbor, statistical machine translation, and recurrent neural networks. Finally, we show that models fine-tuned on our animated GIF description dataset can be helpful for automatic movie description.
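To illustrate the simplest of the three baselines named above, the following is a minimal sketch of nearest-neighbor captioning: each GIF is represented by a feature vector, and a query GIF borrows the description of its most similar training GIF. The feature representation and toy data here are hypothetical placeholders for illustration, not the paper's actual setup.

# Minimal sketch of a nearest-neighbor captioning baseline (illustrative only;
# the feature extractor and example data are assumptions, not the paper's setup).
import numpy as np

def nearest_neighbor_caption(query_feat, train_feats, train_captions):
    """Return the caption of the training GIF closest in feature space."""
    # Cosine similarity between the query and every training feature vector.
    q = query_feat / np.linalg.norm(query_feat)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sims = t @ q
    return train_captions[int(np.argmax(sims))]

# Toy usage with random vectors standing in for real video descriptors.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(3, 128))
train_captions = ["a dog shakes its head", "a man jumps into a pool",
                  "a cat paws at the camera"]
print(nearest_neighbor_caption(rng.normal(size=128), train_feats, train_captions))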
INDEX TERMS
Motion pictures, Visualization, Natural languages, Image sequences, Semantics, Crowdsourcing, Image segmentation
CITATION
Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry Goldberg, Alejandro Jaimes, Jiebo Luo, "TGIF: A New Dataset and Benchmark on Animated GIF Description," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4641-4650, 2016, doi:10.1109/CVPR.2016.502