2015 IEEE International Conference on Computer Vision (ICCV) (2015)
Santiago, Chile
Dec. 7, 2015 to Dec. 13, 2015
ISSN: 2380-7504
ISBN: 978-1-4673-8390-5
pp: 4525-4533
ABSTRACT
Egocentric videos are a valuable source of information as a daily log of our lives. However, a large fraction of egocentric video content is typically irrelevant and boring to re-watch. It is an agonizing task, for example, to manually search for the moment when your daughter first met Mickey Mouse in hours-long egocentric videos taken at Disneyland. Although many summarization methods have been proposed to create concise representations of videos, in practice the value of subshots to users may change according to their immediate preference or mood, so summaries based on fixed criteria may not fully satisfy users' varied search intents. To address this, we propose a storyline representation that expresses an egocentric video as a set of jointly inferred (via MRF inference) story elements comprising actors, locations, supporting objects, and events, depicted on a timeline. We construct such a storyline with very limited annotation data (a list of map locations and weak knowledge of what events may be possible at each location), bootstrapping the process with data obtained through focused Web image and video searches. Our representation supports story-based search with queries in the form of AND-OR graphs, which span any subset of story elements and their spatio-temporal composition. We show the effectiveness of our approach on a set of unconstrained YouTube egocentric videos of visits to Disneyland.
INDEX TERMS
Videos, Search problems, YouTube, Training, Semantics, Visualization, TV
CITATION

B. Xiong, G. Kim and L. Sigal, "Storyline Representation of Egocentric Videos with an Applications to Story-Based Search," 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2015, pp. 4525-4533.
doi:10.1109/ICCV.2015.514