2015 International Conference on Big Data and Smart Computing (BigComp) (2015)
Jeju, South Korea
Feb. 9, 2015 to Feb. 11, 2015
ISBN: 978-1-4799-7303-3
pp: 170-177
Nur Aziza Azis , Computer Science Department, KAIST, Daejeon, Republic of Korea
Ho-Jin Choi , Computer Science Department, KAIST, Daejeon, Republic of Korea
Youssef Iraqi , ECE Department, Khalifa University, Abu Dhabi, United Arab Emirates
ABSTRACT
Advancement of RGB-D cameras capable of tracking human body movement in the form of a skeleton has contributed to growing interest in skeleton-based human action recognition. However, the tracking performance of a single camera is prone to occlusion and is view dependent. In this study, we use fused skeletal data obtained from two views to recognize human actions. We perform a substitutive fusion based on joint tracking status and build a view-invariant action recognition system. The resulting fused skeletal data are transformed into a histogram of cubes as a frame-level feature. Clustering is applied to build a dictionary of frame representatives, and actions are encoded as sequences of frame representatives. Finally, recognition is performed as a sequence matching task using Dynamic Time Warping with K-nearest neighbor. Experimental results show that fused skeletal data consistently yield better recognition performance than their single-view counterparts.
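As a rough illustration of the pipeline the abstract describes, the sketch below shows (a) a substitutive per-joint fusion rule driven by tracking status and (b) Dynamic Time Warping over codeword sequences, which could drive the K-nearest-neighbor matching step. All names here (the status codes, array shapes, and the 0/1 codeword distance) are illustrative assumptions, not the authors' exact formulation.

import numpy as np

# Hypothetical per-joint tracking statuses, Kinect-SDK style.
NOT_TRACKED, INFERRED, TRACKED = 0, 1, 2

def substitutive_fusion(joints_a, status_a, joints_b, status_b):
    """Fuse two skeletons joint by joint: keep the primary view (A) and
    substitute the secondary view (B) when A's joint is less reliable.

    joints_a, joints_b : (J, 3) arrays of joint positions, assumed to be
                         already registered into a common coordinate frame
    status_a, status_b : (J,) arrays of per-joint tracking statuses
    """
    fused = joints_a.copy()
    for j in range(len(fused)):
        if status_a[j] < TRACKED and status_b[j] > status_a[j]:
            fused[j] = joints_b[j]
    return fused

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two codeword sequences,
    where each frame has been replaced by the index of its nearest
    frame representative and mismatching codewords cost 1."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if seq_a[i - 1] == seq_b[j - 1] else 1.0
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

With these pieces, a K-nearest-neighbor classifier would label a test sequence by the majority class among the K training sequences with the smallest DTW distances.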
INDEX TERMS
Joints, Cameras, Histograms, Three-dimensional displays, Vectors, Hip
CITATION

N. A. Azis, H.-J. Choi and Y. Iraqi, "Substitutive skeleton fusion for human action recognition," 2015 International Conference on Big Data and Smart Computing (BigComp), Jeju, South Korea, 2015, pp. 170-177.
doi:10.1109/35021BIGCOMP.2015.7072828