Pattern Recognition, International Conference on (2002)
Quebec City, QC, Canada
Aug. 11, 2002 to Aug. 15, 2002
ISSN: 1051-4651
ISBN: 0-7695-1695-X
pp: 21009
Shingo Miyauchi , Osaka University
Akira Hirano , Osaka University
Noboru Babaguchi , Osaka University
Tadahiro Kitahashi , Osaka University
ABSTRACT
In this paper, we present an approach to detecting semantical events from broadcasted sports video through collaborative multimedia analysis, called intermodal collaboration. Broadcasted video can be viewed as a set of multimodal streams, such as visual, auditory, and textual (closed caption: CC) streams. By considering the temporal dependency among these streams, we aim to improve the reliability and efficiency of event detection. The method consists of three procedural stages: CC stream analysis, auditory stream analysis, and visual stream analysis. In this method, we learn both keywords that frequently appear in the CC stream in connection with the event and feature parameters that characterize cheering and shouting in the auditory stream. Experimental results for broadcasted sports video of American football games indicate that our approach is effective for event detection.
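To make the intermodal collaboration described above concrete, the following minimal Python sketch (not taken from the paper) shows one way the three stages could be cascaded: event keywords in the CC stream propose candidate time windows, audio features characterizing cheering and shouting confirm them, and visual analysis is applied only to the surviving windows. All names here (Caption, audio_energy, visual_verify, EVENT_KEYWORDS) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of intermodal collaboration (assumed names, not the paper's code).
# Stage 1 (CC) narrows the search, stage 2 (audio) confirms crowd reaction,
# and stage 3 (visual) runs only on the few remaining short segments.

from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

@dataclass
class Caption:
    start: float   # caption start time (seconds)
    end: float     # caption end time (seconds)
    text: str      # closed-caption text

# Keywords assumed to be learned from CC streams of past games.
EVENT_KEYWORDS = {"touchdown", "field goal", "intercepted"}

def cc_candidates(captions: List[Caption], margin: float = 10.0) -> Iterable[Tuple[float, float]]:
    """Stage 1: CC stream analysis -- propose time windows around event keywords."""
    for cap in captions:
        text = cap.text.lower()
        if any(kw in text for kw in EVENT_KEYWORDS):
            yield (max(0.0, cap.start - margin), cap.end + margin)

def frange(start: float, stop: float, step: float) -> Iterable[float]:
    t = start
    while t <= stop:
        yield t
        t += step

def audio_confirms(window: Tuple[float, float],
                   audio_energy: Callable[[float], float],
                   threshold: float = 0.7) -> bool:
    """Stage 2: auditory stream analysis -- keep windows whose crowd-noise
    feature (assumed to characterize cheering/shouting) exceeds a threshold."""
    start, end = window
    return max(audio_energy(t) for t in frange(start, end, 0.5)) >= threshold

def detect_events(captions: List[Caption],
                  audio_energy: Callable[[float], float],
                  visual_verify: Callable[[Tuple[float, float]], bool]) -> List[Tuple[float, float]]:
    """Stage 3: visual stream analysis verifies only the audio-confirmed windows."""
    events = []
    for window in cc_candidates(captions):
        if audio_confirms(window, audio_energy) and visual_verify(window):
            events.append(window)
    return events
```

The cascade reflects the efficiency argument in the abstract: cheap textual and auditory checks reduce how much video the comparatively expensive visual analysis must examine.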
CITATION

S. Miyauchi, N. Babaguchi, T. Kitahashi and A. Hirano, "Collaborative Multimedia Analysis for Detecting Semantical Events from Broadcasted Sports Video," Pattern Recognition, International Conference on (ICPR), Quebec City, QC, Canada, 2002, pp. 21009.
doi:10.1109/ICPR.2002.1048476