Issue No. 2, Apr.-June 2014 (vol. 21)
ISSN: 1070-986X
pp: 42-70
Kewei Tu, ShanghaiTech University, China
Meng Meng, University of California, Los Angeles
Mun Wai Lee, Intelligent Automation
Tae Eun Choe, ObjectVideo
Song-Chun Zhu, University of California, Los Angeles
This article proposes a multimedia analysis framework that processes video and text jointly to understand events and answer user queries. The framework produces a parse graph representing the compositional structures of spatial information (objects and scenes), temporal information (actions and events), and causal information (causalities between events and fluents) in the video and text. The knowledge representation is based on a spatial-temporal-causal AND-OR graph (S/T/C-AOG), which jointly models possible hierarchical compositions of objects, scenes, and events as well as their interactions and mutual contexts, and specifies the prior probability distribution over parse graphs. The authors present a probabilistic generative model for joint parsing that captures the relations between the input video/text, their corresponding parse graphs, and the joint parse graph. Based on this probabilistic model, they propose a joint parsing system consisting of three modules: video parsing, text parsing, and joint inference. Video parsing and text parsing produce two parse graphs from the input video and text, respectively. The joint inference module then produces a joint parse graph by performing matching, deduction, and revision on the video and text parse graphs. The proposed framework has the following objectives: to provide deep semantic parsing of video and text that goes beyond traditional bag-of-words approaches; to perform parsing and reasoning across the spatial, temporal, and causal dimensions based on the joint S/T/C-AOG representation; and to show that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the form of who, what, when, where, and why. The authors empirically evaluated the system based on comparison against ground truth as well as the accuracy of query answering, and obtained satisfactory results.
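The three-module pipeline summarized above (video parsing, text parsing, and joint inference via matching, deduction, and revision) can be sketched schematically as follows. This is a minimal illustrative stand-in, not the authors' implementation: the names `ParseGraph`, `parse_video`, `parse_text`, and `joint_inference` are hypothetical, and the probabilistic S/T/C-AOG scoring that drives the actual system is omitted entirely.

```python
from dataclasses import dataclass, field

@dataclass
class ParseGraph:
    """Toy parse graph: nodes stand for detected entities/events.
    The real parse graphs also carry spatial, temporal, and causal edges."""
    nodes: set = field(default_factory=set)

def parse_video(detected_entities):
    """Stand-in for the video parsing module: wraps detections in a graph."""
    return ParseGraph(nodes=set(detected_entities))

def parse_text(extracted_entities):
    """Stand-in for the text parsing module: wraps extracted mentions in a graph."""
    return ParseGraph(nodes=set(extracted_entities))

def joint_inference(video_pg, text_pg):
    """Toy joint inference:
    - matching: find nodes grounded in both modalities;
    - deduction: merge evidence from both parse graphs into a joint graph;
    - revision: not modeled here (no conflicting hypotheses in this sketch)."""
    matched = video_pg.nodes & text_pg.nodes
    joint = ParseGraph(nodes=video_pg.nodes | text_pg.nodes)
    return joint, matched
```

For example, a video parse containing {person, car} and a text parse containing {car, drive} would match on "car" and deduce a joint graph over all three entities; the actual system instead ranks candidate joint parse graphs by their posterior probability under the S/T/C-AOG prior.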
Keywords: Text recognition, Semantics, Computer vision, Multimedia communication, Streaming media, Probabilistic logic, Computational modeling

K. Tu, M. Meng, M. W. Lee, T. E. Choe and S.-C. Zhu, "Joint Video and Text Parsing for Understanding Events and Answering Queries," in IEEE MultiMedia, vol. 21, no. 2, pp. 42-70, 2014.