2013 IEEE Conference on Computer Vision and Pattern Recognition
Portland, OR, USA
June 23, 2013 to June 28, 2013
ISSN: 1063-6919
pp. 2483-2490
ABSTRACT
The ubiquitous availability of Internet video offers the vision community an exciting opportunity to learn localized visual concepts directly from real-world imagery. Unfortunately, most such attempts are doomed because traditional approaches are ill-suited, both in their computational characteristics and in their inability to robustly contend with the label noise that plagues uncurated Internet content. We present CRANE, a weakly supervised algorithm specifically designed to learn under such conditions. First, we exploit the asymmetric availability of real-world training data, in which a small number of positive videos tagged with the concept is supplemented with large quantities of unreliable negative data. Second, we ensure that CRANE is robust to label noise, both to tagged videos that fail to contain the concept and to occasional negative videos that do. Finally, CRANE is highly parallelizable, making it practical to deploy at large scale without sacrificing the quality of the learned solution. Although CRANE is general, this paper focuses on segment annotation, where we show state-of-the-art pixel-level segmentation results on two datasets, one of which includes a training set of spatiotemporal segments from more than 20,000 videos.
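The abstract does not spell out CRANE's actual scoring rule, so the short Python sketch below is only a hypothetical illustration of the kind of weakly supervised ranking it describes, not the published algorithm: segments from positively tagged videos are scored against a large pool of unreliable negative segments, with each negative independently penalizing its nearest positive neighbor. The function name and the exponential penalty are invented for illustration.

```python
import numpy as np

def score_positive_segments(pos_feats: np.ndarray, neg_feats: np.ndarray) -> np.ndarray:
    """Rank segments from positively tagged videos (hypothetical CRANE-style scheme).

    pos_feats: (P, D) features of segments from videos tagged with the concept.
    neg_feats: (N, D) features of segments from unreliable negative videos.
    Returns one score per positive segment; higher means more likely to
    depict the concept.
    """
    scores = np.zeros(len(pos_feats))
    for n in neg_feats:
        # Each negative segment independently penalizes its single nearest
        # positive segment; a few mislabeled negatives therefore perturb
        # only a few scores, giving some robustness to label noise.
        d = np.linalg.norm(pos_feats - n, axis=1)
        j = int(np.argmin(d))
        scores[j] -= np.exp(-d[j])  # assumed soft penalty form
    return scores
```

Because every negative contributes an independent penalty, the loop can be sharded across workers and the partial score vectors summed, which is consistent with the parallelizability the abstract emphasizes.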
CITATION
Jay Yagnik, Rahul Sukthankar, Kevin Tang, Li Fei-Fei, "Discriminative Segment Annotation in Weakly Labeled Video," 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2483-2490, 2013, doi:10.1109/CVPR.2013.321