Issue No. 02 - Apr.-June (2014 vol. 21)
ISSN: 1070-986X
pp: 42-70
Kewei Tu , ShanghaiTech University, China
Meng Meng , University of California, Los Angeles
Mun Wai Lee , Intelligent Automation
Tae Eun Choe , ObjectVideo
Song-Chun Zhu , University of California, Los Angeles
ABSTRACT
This article proposes a multimedia analysis framework to process video and text jointly for understanding events and answering user queries. The framework produces a parse graph that represents the compositional structures of spatial information (objects and scenes), temporal information (actions and events), and causal information (causalities between events and fluents) in the video and text. The knowledge representation of the framework is based on a spatial-temporal-causal AND-OR graph (S/T/C-AOG), which jointly models possible hierarchical compositions of objects, scenes, and events as well as their interactions and mutual contexts, and specifies the prior probability distribution of the parse graphs. The authors present a probabilistic generative model for joint parsing that captures the relations between the input video/text, their corresponding parse graphs, and the joint parse graph. Based on the probabilistic model, the authors propose a joint parsing system consisting of three modules: video parsing, text parsing, and joint inference. Video parsing and text parsing produce two parse graphs from the input video and text, respectively. The joint inference module produces a joint parse graph by performing matching, deduction, and revision on the video and text parse graphs. The proposed framework has the following objectives: to provide deep semantic parsing of video and text that goes beyond traditional bag-of-words approaches; to perform parsing and reasoning across the spatial, temporal, and causal dimensions based on the joint S/T/C-AOG representation; and to show that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the forms of who, what, when, where, and why. The authors empirically evaluated the system by comparing its output against ground truth and by measuring query-answering accuracy, and obtained satisfactory results.
INDEX TERMS
Text recognition, Semantics, Computer vision, Multimedia communication, Streaming media, Probabilistic logic, Computational modeling, query answering, multimedia, joint video and text parsing, knowledge representation, AND-OR graph, multimedia video analysis
CITATION
Kewei Tu, Meng Meng, Mun Wai Lee, Tae Eun Choe, Song-Chun Zhu, "Joint Video and Text Parsing for Understanding Events and Answering Queries", IEEE MultiMedia, vol. 21, no. 2, pp. 42-70, Apr.-June 2014, doi:10.1109/MMUL.2014.29