2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2013)
Portland, OR, USA
June 23, 2013 to June 28, 2013
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/CVPR.2013.342
In this paper, we develop a new model for recognizing human actions. An action is modeled as a very sparse sequence of temporally local discriminative key frames: collections of partial key-poses of the actor(s) depicting key states in the action sequence. We cast the learning of key frames in a max-margin discriminative framework, where we treat key frames as latent variables. This allows us to jointly learn a set of the most discriminative key frames while also learning the local temporal context between them. Key frames are encoded using a spatially localizable poselet-like representation with HoG and BoW components learned from weak annotations. We rely on a structured SVM formulation to align our components and mine for hard negatives to boost localization performance. This results in a model that supports spatio-temporal localization and is insensitive to dropped frames or partial observations. We show classification performance that is competitive with the state of the art on the benchmark UT-Interaction dataset and illustrate that our model outperforms prior methods in an on-line streaming setting.
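The scoring of a video under such a latent key-frame model can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the authors' implementation): given per-frame appearance scores for each of K key frames (e.g., poselet/HoG template responses) and a pairwise score encoding the local temporal context between consecutive key frames, the best ordered placement of the key frames over the timeline can be found by dynamic programming. The names `score_video` and `pair_score` are hypothetical.

```python
def score_video(frame_scores, pair_score, K):
    """Best total score over ordered key-frame placements t_1 < ... < t_K.

    frame_scores: T x K matrix; frame_scores[t][k] is the (hypothetical)
        appearance score of placing key frame k at time t.
    pair_score: function of the temporal gap between consecutive key
        frames, modeling the local temporal context.
    """
    T = len(frame_scores)
    NEG = float("-inf")
    # best[k][t] = best score using key frames 0..k, with key frame k at time t
    best = [[NEG] * T for _ in range(K)]
    for t in range(T):
        best[0][t] = frame_scores[t][0]
    for k in range(1, K):
        for t in range(k, T):  # key frame k cannot come before its predecessors
            for tp in range(k - 1, t):
                if best[k - 1][tp] == NEG:
                    continue
                cand = best[k - 1][tp] + pair_score(t - tp) + frame_scores[t][k]
                if cand > best[k][t]:
                    best[k][t] = cand
    return max(best[K - 1])
```

Because the key frames are latent, such a maximization over placements would run inside both inference and max-margin training; the sparsity of the key-frame sequence is what makes the model robust to dropped frames.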
Video Analysis, Activity Recognition, Discriminative Keyframes
M. Raptis and L. Sigal, "Poselet Key-Framing: A Model for Human Activity Recognition," 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 2013, pp. 2650-2657.