2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
Las Vegas, NV, United States
June 27, 2016 to June 30, 2016
ISSN: 1063-6919
ISBN: 978-1-4673-8851-1
pp: 1001-1009
ABSTRACT
We introduce the novel problem of automatically generating animated GIFs from video. GIFs are short looping videos with no sound that combine the appeal of images and video and readily capture our attention. GIFs tell a story, express emotion, turn events into humorous moments, and are the new wave of photojournalism. We pose the question: can we automate the entirely manual and elaborate process of GIF creation by leveraging the plethora of user-generated GIF content? We propose a Robust Deep RankNet that, given a video, generates a ranked list of its segments according to their suitability as GIFs. We train our model to learn what visual content is often selected for GIFs, using over 100K user-generated GIFs and their corresponding video sources. We deal effectively with the noisy web data by proposing a novel adaptive Huber loss in the ranking formulation. We show that our approach is robust to outliers and picks up several patterns frequently present in popular animated GIFs. On our new large-scale benchmark dataset, we demonstrate the advantage of our approach over several state-of-the-art methods.
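The core robustness idea described in the abstract, a Huber-style loss applied to a pairwise ranking margin, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the unit margin, and the fixed `delta` value are assumptions (the paper's loss is adaptive, adjusting its behavior per training pair).

```python
def huber_rank_loss(s_pos, s_neg, delta=1.5):
    """Huber-style pairwise ranking loss (illustrative sketch).

    s_pos: predicted score of a segment selected for a GIF
    s_neg: predicted score of a non-selected segment
    The margin violation u = max(0, 1 - s_pos + s_neg) is penalized
    quadratically when small and only linearly when large, so noisy
    training pairs (outliers) contribute a bounded gradient.
    """
    u = max(0.0, 1.0 - s_pos + s_neg)
    if u <= delta:
        return 0.5 * u * u                      # quadratic region: small violations
    return delta * u - 0.5 * delta * delta      # linear region: caps outlier influence

# A correctly ranked pair with a wide enough margin incurs zero loss:
print(huber_rank_loss(2.0, 0.5))  # prints 0.0
```

The linear tail is what makes the loss robust to the noisy web supervision mentioned in the abstract: a mislabeled pair with a huge margin violation cannot dominate training the way it would under a purely quadratic penalty.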
INDEX TERMS
Visualization, Neural networks, Robustness, Noise measurement, Social network services, Computational modeling, Training
CITATION
Michael Gygli, Yale Song, Liangliang Cao, "Video2GIF: Automatic Generation of Animated GIFs from Video," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1001-1009, 2016, doi:10.1109/CVPR.2016.114