YouTube-BoundingBoxes: A Large High-Precision Human-Annotated Data Set for Object Detection in Video
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
Honolulu, Hawaii, USA
July 21, 2017 to July 26, 2017
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/CVPR.2017.789
We introduce a new large-scale data set of video URLs with densely-sampled object bounding box annotations called YouTube-BoundingBoxes (YT-BB). The data set consists of approximately 380,000 video segments, each about 19 seconds long, automatically selected to feature objects in natural settings without editing or post-processing, with a recording quality often akin to that of a hand-held cell phone camera. The objects represent a subset of the COCO label set. All video segments were human-annotated with high-precision classification labels and bounding boxes at 1 frame per second. The use of a cascade of increasingly precise human annotations ensures a label accuracy above 95% for every class, as well as tight bounding boxes. Finally, we train and evaluate well-known deep network architectures and report baseline figures for per-frame classification and localization. We also demonstrate how the temporal contiguity of video can potentially be used to improve such inferences. The data set can be found at https://research.google.com/youtube-bb. We hope the availability of such a large curated corpus will spur new advances in video object detection and tracking.
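As an illustration of how per-frame annotations of this kind might be consumed, the sketch below parses a single CSV-style annotation row into a structured record. The column layout (video ID, timestamp, class ID and name, object track ID, presence flag, normalized box coordinates) is an assumption made for illustration, not a specification taken from the abstract; the actual file format should be checked against the dataset download page.

```python
# Hypothetical sketch: parse one YT-BB-style annotation row.
# The column layout below is an assumption for illustration;
# verify it against the actual YT-BB distribution files.
from dataclasses import dataclass


@dataclass
class BoxAnnotation:
    youtube_id: str     # 11-character YouTube video ID
    timestamp_ms: int   # frame timestamp within the video
    class_id: int
    class_name: str
    object_id: int      # distinguishes multiple tracked instances
    present: bool       # whether the object is visible in this frame
    xmin: float         # box coordinates, assumed normalized to [0, 1]
    xmax: float
    ymin: float
    ymax: float


def parse_row(line: str) -> BoxAnnotation:
    """Split one comma-separated annotation row into typed fields."""
    (yid, ts, cid, cname, oid, presence,
     xmin, xmax, ymin, ymax) = line.strip().split(",")
    return BoxAnnotation(
        youtube_id=yid,
        timestamp_ms=int(ts),
        class_id=int(cid),
        class_name=cname,
        object_id=int(oid),
        present=(presence == "present"),
        xmin=float(xmin), xmax=float(xmax),
        ymin=float(ymin), ymax=float(ymax),
    )


# Example row (made-up values, for illustration only):
row = "AAAAAAAAAAA,1000,5,dog,0,present,0.10,0.50,0.20,0.80"
ann = parse_row(row)
```

Keeping the `object_id` field in the record is what makes it possible to link boxes for the same instance across the 1 fps annotated frames, which is the property the abstract highlights for exploiting temporal contiguity.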
cameras, feature extraction, image classification, image segmentation, learning (artificial intelligence), object detection, object tracking, social networking (online), video signal processing
E. Real, J. Shlens, S. Mazzocchi, X. Pan and V. Vanhoucke, "YouTube-BoundingBoxes: A Large High-Precision Human-Annotated Data Set for Object Detection in Video," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, USA, 2017, pp. 7464-7473.