Issue No. 04 - October-December (2005 vol. 12)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MMUL.2005.87
Alexandre R.J. François , University of Southern California
Ram Nevatia , University of Southern California
Jerry Hobbs , Information Sciences Institute, USC
Robert C. Bolles , SRI International
The notion of "events" is extremely important in characterizing the contents of video. An event is typically triggered by some kind of change of state captured in the video, such as when an object starts moving. The ability to reason with events is a critical step toward video understanding. This article describes the findings of a recent workshop series that has produced an ontology framework for representing video events, called the Video Event Representation Language (VERL), and a companion annotation framework, called the Video Event Markup Language (VEML). One of the key concepts in this work is the modeling of events as composable, whereby complex events are constructed from simpler events by operations such as sequencing, iteration, and alternation. The article presents an extensible event and object ontology expressed in VERL and discusses a detailed example of applying VERL and VEML to the description of a "tailgating" event in surveillance video.
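The composability idea in the abstract (complex events built from simpler ones by sequencing, iteration, and alternation) can be illustrated with a small sketch. This is not VERL syntax; the operator names, the primitive event names, and the dictionary representation are all hypothetical, chosen only to show how such composition operators nest.

```python
# Illustrative sketch only (not VERL syntax): complex events built from
# simpler ones by sequencing, iteration, and alternation. All names here
# (SEQUENCE, REPEAT, OR, the primitive event labels) are assumptions.

def sequence(*events):
    # Sequencing: the sub-events occur one after another, in order.
    return {"op": "SEQUENCE", "events": list(events)}

def repeat(event, times):
    # Iteration: the same sub-event occurs `times` times in succession.
    return {"op": "REPEAT", "event": event, "times": times}

def alternation(*events):
    # Alternation: the complex event occurs if any one sub-event occurs.
    return {"op": "OR", "events": list(events)}

# Hypothetical primitive events for a surveillance scenario:
approach = {"op": "PRIMITIVE", "name": "vehicle-approaches-gate"}
pass_gate = {"op": "PRIMITIVE", "name": "vehicle-passes-gate"}

# A toy composite in the spirit of the article's "tailgating" example:
# an approach followed by two gate passages in sequence.
tailgating = sequence(approach, repeat(pass_gate, 2))

print(tailgating["op"])           # SEQUENCE
print(len(tailgating["events"]))  # 2
```

Representing composite events as nested structures like this makes the decomposition explicit: a recognizer can walk the tree and match each operator against observed primitive events.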
VERL, VEML, ISO, standardization, video annotation, video representation, modeling, sequencing, object ontology
R. C. Bolles, R. Nevatia, A. R. François and J. Hobbs, "VERL: An Ontology Framework for Representing and Annotating Video Events," in IEEE MultiMedia, vol. 12, no. 4, pp. 76-86, 2005.