2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014)
Columbus, OH, USA
June 23, 2014 to June 28, 2014
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/CVPR.2014.15
The desire to enable computers to learn semantic concepts from large quantities of Internet videos has motivated increasing interest in semantic video understanding, and video segmentation is an important yet challenging step toward understanding videos. The main difficulty of video segmentation arises from the burden of labeling training samples, which has left the problem largely unsolved. In this paper, we present a novel nearest neighbor-based label transfer scheme for weakly supervised video segmentation. Whereas previous weakly supervised video segmentation methods have been limited to the two-class case, our proposed scheme tackles the more challenging multiclass setting, finding a semantically meaningful label for every pixel in a video. Our scheme enjoys several favorable properties compared with conventional methods. First, a weakly supervised hashing procedure is carried out to handle both metric and semantic similarity. Second, the proposed nearest neighbor-based label transfer algorithm effectively avoids the overfitting caused by weakly supervised data. Third, a multi-video graph model is built to encourage smoothness between regions that are spatiotemporally adjacent and similar in appearance. We demonstrate the effectiveness of the proposed scheme by comparing it with several state-of-the-art weakly supervised segmentation methods on a new Wild8 dataset and two other publicly available datasets.
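To make the core idea concrete, the following is a minimal, hypothetical sketch of nearest neighbor-based label transfer via binary hashing. It does not reproduce the paper's weakly supervised hashing procedure; instead it uses plain random-projection (cosine) hashing, with all feature dimensions, bit counts, and cluster parameters chosen purely for illustration: region features are mapped to binary codes, and each unlabeled region inherits the majority label of its Hamming-nearest labeled neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(features, projections):
    """Map real-valued features to binary codes via random hyperplanes."""
    return (features @ projections > 0).astype(np.uint8)

def transfer_labels(query_feats, labeled_feats, labels, n_bits=16, k=3):
    """Assign each query the majority label of its k Hamming-nearest labeled neighbors."""
    d = query_feats.shape[1]
    projections = rng.standard_normal((d, n_bits))
    q_codes = hash_codes(query_feats, projections)
    l_codes = hash_codes(labeled_feats, projections)
    out = []
    for q in q_codes:
        dist = (q ^ l_codes).sum(axis=1)      # Hamming distance to every labeled code
        nn = np.argsort(dist)[:k]             # indices of the k nearest labeled regions
        vals, counts = np.unique(labels[nn], return_counts=True)
        out.append(vals[np.argmax(counts)])   # majority vote among neighbors
    return np.array(out)

# Toy example: two well-separated clusters of synthetic "region features".
labeled = np.vstack([rng.normal(3, 0.1, (20, 8)), rng.normal(-3, 0.1, (20, 8))])
labels = np.array([0] * 20 + [1] * 20)
queries = np.vstack([rng.normal(3, 0.1, (5, 8)), rng.normal(-3, 0.1, (5, 8))])
pred = transfer_labels(queries, labeled, labels)
print(pred)
```

In this sketch the hash is purely metric; the paper's contribution is a hashing procedure that additionally respects the weak semantic supervision, so that Hamming neighbors tend to share concept labels rather than merely appearance.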
Index Terms: Streaming media, Labeling, Semantics, Image segmentation, Training, Measurement, Spatiotemporal phenomena
X. Liu, D. Tao, M. Song, Y. Ruan, C. Chen and J. Bu, "Weakly Supervised Multiclass Video Segmentation," 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 2014, pp. 57-64.