2015 IEEE International Conference on Computer Vision (ICCV) (2015)
Santiago, Chile
Dec. 7, 2015 to Dec. 13, 2015
ISSN: 2380-7504
ISBN: 978-1-4673-8390-5
pp: 4561-4569
ABSTRACT
Complex event retrieval is a challenging research problem, especially when no training videos are available. An alternative to collecting training videos is to train a large semantic concept bank a priori. Given a text description of an event, event retrieval is performed by selecting concepts linguistically related to the event description and fusing the concept responses on unseen videos. However, defining an exhaustive concept lexicon and pre-training it requires vast computational resources. Therefore, recent approaches automate concept discovery and training by leveraging large amounts of weakly annotated web data. Compact, visually salient concepts are obtained automatically through the use of concept pairs or, more generally, n-grams. However, not all visually salient n-grams are useful for an event query; some combinations of concepts may be visually compact but irrelevant, which drastically degrades performance. We propose an event retrieval algorithm that constructs pairs of automatically discovered concepts and then prunes those concepts that are unlikely to be helpful for retrieval. Pruning depends both on the query and on the specific video instance being evaluated. Our approach also addresses the calibration and domain adaptation issues that arise when applying concept detectors to unseen videos. We demonstrate large improvements over other vision-based systems on the TRECVID MED 13 dataset.
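The pipeline the abstract describes can be sketched in a few lines: select concepts from the bank that are linguistically related to the query (pruning the rest), then fuse the retained detector responses per video. The sketch below is purely illustrative, not the authors' implementation; the word-overlap relevance measure, the `min_rel` threshold, and average-based late fusion are all assumptions standing in for the paper's query- and instance-dependent pruning and calibrated fusion.

```python
# Illustrative sketch (NOT the authors' method): rank videos for an event
# query by keeping query-relevant concepts and fusing their responses.

def relevance(query_words, concept):
    # Hypothetical linguistic relatedness: fraction of the concept's
    # constituent words that appear in the query.
    words = concept.split("_")
    return sum(w in query_words for w in words) / len(words)

def retrieve(query, concept_scores, min_rel=0.5):
    """concept_scores: {concept: {video_id: detector response in [0, 1]}}."""
    q = set(query.lower().split())
    # Pruning step: discard concepts unrelated to the query text.
    kept = [c for c in concept_scores if relevance(q, c) >= min_rel]
    videos = {v for c in kept for v in concept_scores[c]}
    # Late fusion: average the retained concept responses per video.
    fused = {v: sum(concept_scores[c].get(v, 0.0) for c in kept)
                / max(len(kept), 1)
             for v in videos}
    return sorted(fused, key=fused.get, reverse=True)

# Toy concept bank with detector responses on two videos.
bank = {
    "dog_grooming": {"v1": 0.9, "v2": 0.2},
    "car_repair":   {"v1": 0.1, "v2": 0.8},
}
print(retrieve("grooming a dog", bank))  # -> ['v1', 'v2']
```

In the toy run, only `dog_grooming` survives pruning for the query "grooming a dog", so `v1` ranks first; the paper's actual method additionally makes pruning depend on the individual video instance and calibrates detector scores across domains.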
INDEX TERMS
Videos, Detectors, Visualization, Training, Calibration, Tires, Computational modeling
CITATION

B. Singh, X. Han, Z. Wu, V. I. Morariu and L. S. Davis, "Selecting Relevant Web Trained Concepts for Automated Event Retrieval," 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2015, pp. 4561-4569.
doi:10.1109/ICCV.2015.518