Issue No. 10 - Oct. (2012 vol. 34)
ISSN: 0162-8828
pp: 1927-1941
Yung-Yu Chuang , National Taiwan University, Taipei
Ming-Fang Weng , National Taiwan University, Taipei
ABSTRACT
The success of query-by-concept, recently proposed to meet video retrieval needs, depends greatly on the accuracy of concept-based video indexing. Unfortunately, recognizing the presence of concepts in a video segment, or extracting an objective linguistic description from it, remains challenging because of the semantic gap: the lack of correspondence between machine-extracted low-level features and human high-level conceptual interpretation. This paper studies three issues with the aim of reducing this gap: 1) how to explore cues beyond low-level features, 2) how to combine diverse cues to improve performance, and 3) how to utilize the learned knowledge when applying it to a new domain. To address these issues, we propose a framework that jointly exploits multiple cues across multiple video domains. First, recursive algorithms are proposed to learn both inter-concept and inter-shot relationships from annotations. Second, all concept labels for all shots are refined simultaneously in a single fusion model. Additionally, unseen shots are assigned pseudo-labels according to their initial prediction scores, so that contextual and temporal relationships can be learned without additional human effort. Integrating cues embedded within both the training and test video sets accommodates domain change. Experiments on popular benchmarks show that our framework is effective, achieving significant improvements over baseline methods.
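
A minimal, hypothetical sketch of the idea described above (not the authors' published algorithm): initial per-shot concept scores are thresholded into pseudo-labels, and all scores are then refined jointly using a contextual cue (inter-concept correlation) and a temporal cue (neighboring shots). The function names, thresholds, and fusion weights below are illustrative assumptions.

import numpy as np

def refine_scores(scores, concept_corr, alpha=0.4, beta=0.3, n_iters=10):
    # scores: (n_shots, n_concepts) initial prediction scores in [0, 1]
    # concept_corr: (n_concepts, n_concepts) inter-concept correlation,
    #   assumed to be learned from training annotations
    refined = scores.copy()
    for _ in range(n_iters):
        contextual = refined @ concept_corr              # inter-concept cue
        temporal = refined.copy()                        # inter-shot (temporal) cue
        temporal[1:-1] = 0.5 * (refined[:-2] + refined[2:])
        refined = (1 - alpha - beta) * scores + alpha * contextual + beta * temporal
        refined = np.clip(refined, 0.0, 1.0)
    return refined

# Toy usage: 100 shots, 20 concepts; pseudo-labels come from thresholding the
# initial scores, so no extra human annotation is needed on the test domain.
scores = np.random.rand(100, 20)
corr = np.full((20, 20), 0.01) + 0.8 * np.eye(20)        # toy correlation matrix
pseudo_labels = (scores > 0.7).astype(int)               # illustrative threshold
refined = refine_scores(scores, corr)
print(refined.shape, pseudo_labels.sum())
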
INDEX TERMS
Context awareness, Indexing, Semantics, Feature extraction, Training data, Video annotation, Detectors, TRECVID, concept detection, cross-domain learning, contextual correlation, temporal dependency
CITATION
Yung-Yu Chuang, Ming-Fang Weng, "Cross-Domain Multicue Fusion for Concept-Based Video Indexing", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 34, no. 10, pp. 1927-1941, Oct. 2012, doi:10.1109/TPAMI.2011.273