Learning Actionlet Ensemble for 3D Human Action Recognition
May 2014 (vol. 36 no. 5)
pp. 1-1
Junsong Yuan, School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore
Jiang Wang, EECS Department, Northwestern University, Evanston, IL, USA
Zicheng Liu, Microsoft Research, Redmond, WA, USA
Ying Wu, EECS Department, Northwestern University, Evanston, IL, USA
Abstract:
Human action recognition is an important yet challenging task. Human actions usually involve human-object interactions, highly articulated motions, high intra-class variations, and complicated temporal structures. Recently developed commodity depth sensors open up new possibilities for dealing with this problem by providing 3D depth data of the scene. This information not only enables powerful human motion capture, but also makes it possible to efficiently model human-object interactions and intra-class variations. In this paper, we propose to characterize human actions with a novel actionlet ensemble model, where each actionlet represents the interaction of a subset of human joints. The proposed model is robust to noise, invariant to translational and temporal misalignment, and capable of characterizing both human motion and human-object interactions. We evaluate the proposed approach on three challenging action recognition datasets captured by Kinect devices, a multiview action recognition dataset captured with Kinect devices, and a dataset captured by a motion capture system. The experimental evaluations show that the proposed approach achieves superior performance to state-of-the-art algorithms.
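To make the actionlet idea in the abstract concrete, the following is a minimal, hypothetical Python sketch in which an actionlet is simply a subset of skeleton joint indices and an ensemble score is a weighted combination of per-actionlet classifier outputs. The joint indices, the pooled motion statistics, and the linear mixing below are illustrative assumptions for exposition only, not the features or learning procedure used in the paper.

import numpy as np

NUM_JOINTS = 20  # e.g., a Kinect v1 skeleton (assumption)

def actionlet_feature(skeleton_seq, joint_subset):
    # skeleton_seq: (T, NUM_JOINTS, 3) array of 3D joint positions over T frames.
    # joint_subset: list of joint indices forming one actionlet.
    traj = skeleton_seq[:, joint_subset, :]            # (T, |A|, 3)
    disp = np.diff(traj, axis=0)                       # frame-to-frame displacement
    # Translation-invariant summary statistics (illustrative choice).
    return np.concatenate([disp.mean(axis=(0, 1)), disp.std(axis=(0, 1))])

def ensemble_score(skeleton_seq, actionlets, classifiers, weights):
    # Weighted combination of per-actionlet classifier scores.
    scores = [clf(actionlet_feature(skeleton_seq, joints))
              for joints, clf in zip(actionlets, classifiers)]
    return float(np.dot(weights, scores))

# Toy usage: two hand-picked actionlets (e.g., right arm and both legs),
# placeholder scoring functions, and a random 30-frame skeleton sequence.
actionlets = [[8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19]]
classifiers = [lambda f: f.sum(), lambda f: f.mean()]
weights = np.array([0.6, 0.4])
seq = np.random.rand(30, NUM_JOINTS, 3)
print(ensemble_score(seq, actionlets, classifiers, weights))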
Index Terms:
Joints, Three-dimensional displays, Hidden Markov models, Robustness, Noise, Feature extraction, Gesture, Computer vision, Video analysis
Citation:
Junsong Yuan, Jiang Wang, Zicheng Liu, Ying Wu, "Learning Actionlet Ensemble for 3D Human Action Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 1-1, May 2014, doi:10.1109/TPAMI.2013.198