Issue No. 08 - Aug. 2012 (vol. 34)
pp. 1576-1588
T. Guha, Dept. of Electr. & Comput. Eng., Univ. of British Columbia, Vancouver, BC, Canada
ABSTRACT
This paper explores the effectiveness of sparse representations obtained by learning an overcomplete basis set (dictionary) in the context of action recognition in videos. Although this work concentrates on recognizing human movements (physical actions as well as facial expressions), the proposed approach is fairly general and can be used to address other classification problems. In order to model human actions, three overcomplete dictionary learning frameworks are investigated. An overcomplete dictionary is constructed from a set of spatio-temporal descriptors (extracted from the video sequences) in such a way that each descriptor is represented by a linear combination of a small number of dictionary elements. This leads to a more compact and richer representation of the video sequences than existing methods based on clustering and vector quantization. For each framework, a novel classification algorithm is proposed. Additionally, this work presents a new local spatio-temporal feature that is distinctive, scale invariant, and fast to compute. The proposed approach consistently achieves state-of-the-art results on several public data sets containing various physical actions and facial expressions.
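The abstract (and the index terms below) refer to sparse coding over a learned overcomplete dictionary with orthogonal matching pursuit. The following minimal Python sketch illustrates that general idea using scikit-learn; it is not the paper's pipeline, and the descriptor dimension, dictionary size, and sparsity level are illustrative assumptions only.

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Stand-in for spatio-temporal descriptors extracted from video sequences:
# 500 descriptors, each 72-dimensional (both numbers are assumptions).
descriptors = rng.standard_normal((500, 72))

# Overcomplete dictionary: more atoms (128) than descriptor dimensions (72).
learner = DictionaryLearning(
    n_components=128,             # number of dictionary atoms
    transform_algorithm="omp",    # orthogonal matching pursuit for sparse codes
    transform_n_nonzero_coefs=5,  # each descriptor uses at most 5 atoms
    random_state=0,
)

# Learn the dictionary, then express every descriptor as a sparse
# linear combination of a small number of dictionary elements.
codes = learner.fit(descriptors).transform(descriptors)

print(codes.shape)                     # (500, 128)
print((codes != 0).sum(axis=1).max())  # at most 5 non-zero coefficients each

The resulting sparse codes could then be pooled per video and fed to a classifier; the paper instead proposes dedicated classification algorithms for each of its three dictionary learning frameworks.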
INDEX TERMS
video signal processing, dictionaries, face recognition, gesture recognition, image classification, image representation, image sequences, learning (artificial intelligence), pattern clustering, clustering, vector quantization, sparse representation, human action recognition, human movement recognition, physical action, facial expression, classification problem, human action model, overcomplete dictionary learning framework, spatio-temporal descriptors, video sequence representation, dictionary element, vectors, feature extraction, videos, detectors, video sequences, humans, action recognition, dictionary learning, expression recognition, overcomplete, orthogonal matching pursuit
CITATION
T. Guha, "Learning Sparse Representations for Human Action Recognition", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.34, no. 8, pp. 1576-1588, Aug. 2012, doi:10.1109/TPAMI.2011.253