Issue No. 11, Nov. 2013 (vol. 35)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2013.65
A. Gaidon , Xerox Res. Centre Eur., Meylan, France
Z. Harchaoui , INRIA Grenoble Rhone-Alpes, Montbonnot, France
C. Schmid , INRIA Grenoble Rhone-Alpes, Montbonnot, France
We address the problem of localizing actions, such as opening a door, in hours of challenging video data. We propose a model based on a sequence of atomic action units, termed "actoms," that are semantically meaningful and characteristic of the action. Our actom sequence model (ASM) represents an action as a sequence of histograms of actom-anchored visual features, which can be seen as a temporally structured extension of the bag-of-features representation. Training requires the annotation of actoms for action examples. At test time, actoms are localized automatically based on a nonparametric model of the distribution of actoms, which also acts as a prior on an action's temporal structure. We present experimental results on two recent benchmarks for temporal action localization: "Coffee and Cigarettes" and the "DLSBP" dataset. We also adapt our approach to a classification-by-localization setup and demonstrate its applicability on the challenging "Hollywood 2" dataset. We show that our ASM method outperforms the current state of the art in temporal action localization, as well as baselines that localize actions with a sliding-window method.
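The core representation described above, per-actom bag-of-features histograms concatenated in temporal order, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the fixed symmetric window around each actom, and the L1 normalization are all assumptions made for clarity.

```python
import numpy as np

def asm_descriptor(feature_times, feature_words, actom_times,
                   vocab_size, half_width):
    """Hypothetical sketch of an ASM-style descriptor: one bag-of-features
    histogram of quantized local features per annotated actom, with the
    per-actom histograms concatenated in temporal order.

    feature_times : (N,) timestamps of quantized local features
    feature_words : (N,) visual-word indices in [0, vocab_size)
    actom_times   : timestamps of the annotated actoms (assumed given)
    half_width    : half-width of the temporal window anchored at each actom
                    (a simplifying assumption; the paper's anchoring differs)
    """
    histograms = []
    for t in actom_times:
        # keep features falling in the window centered on this actom
        mask = np.abs(feature_times - t) <= half_width
        hist = np.bincount(feature_words[mask],
                           minlength=vocab_size).astype(float)
        if hist.sum() > 0:
            hist /= hist.sum()  # L1-normalize each per-actom histogram
        histograms.append(hist)
    # temporally ordered concatenation = structured extension of bag-of-features
    return np.concatenate(histograms)
```

With K actoms and a vocabulary of size V, the descriptor has length K*V; an un-anchored bag-of-features would instead pool all features into a single length-V histogram, discarding the temporal structure.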
Training, Hidden Markov models, Visualization, Spatiotemporal phenomena, Adaptation models, Support vector machines, Histograms
A. Gaidon, Z. Harchaoui and C. Schmid, "Temporal Localization of Actions with Actoms," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 11, pp. 2782-2795, 2013.