2015 International Conference on Big Data and Smart Computing (BigComp)
Jeju, South Korea
Feb. 9–11, 2015
Nur Aziza Azis, Computer Science Department, KAIST, Daejeon, Republic of Korea
Ho-Jin Choi, Computer Science Department, KAIST, Daejeon, Republic of Korea
Youssef Iraqi, ECE Department, Khalifa University, Abu Dhabi, United Arab Emirates
Advances in RGB-D cameras capable of tracking human body movement in the form of a skeleton have contributed to growing interest in skeleton-based human action recognition. However, the tracking performance of a single camera is prone to occlusion and is view dependent. In this study, we recognize human actions using fused skeletal data obtained from two views. We perform a substitutive fusion based on joint tracking status and build a view-invariant action recognition system. The resulting fused skeletal data are transformed into a histogram of cubes as a frame-level feature. Clustering is applied to build a dictionary of frame representatives, and actions are encoded as sequences of frame representatives. Finally, recognition is performed as a sequence-matching task using Dynamic Time Warping with a K-nearest neighbor classifier. Experimental results show that the fused skeletal data consistently yield better recognition performance than their single-view counterparts.
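The pipeline described in the abstract can be illustrated with a minimal sketch. The fusion, feature, and clustering details below are assumptions for illustration, not the authors' exact implementation: joint positions are taken as arrays already registered to a common coordinate frame, tracking status as a boolean "tracked" flag per joint, and recognition as 1-nearest-neighbor matching under a standard DTW distance.

```python
import numpy as np

def fuse_skeletons(joints_a, status_a, joints_b, status_b):
    """Substitutive fusion sketch: keep each joint from view A when it is
    tracked, otherwise substitute the corresponding joint from view B.
    joints_* are assumed (J, 3) arrays in a common coordinate frame;
    status_* are boolean per-joint 'tracked' flags."""
    fused = joints_a.copy()
    fallback = status_b & ~status_a          # joints tracked only in view B
    fused[fallback] = joints_b[fallback]
    return fused

def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two sequences of
    frame-level feature vectors (Euclidean frame cost)."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, train_seqs, train_labels, k=1):
    """K-nearest-neighbor classification under the DTW distance:
    vote among the k training sequences closest to the query."""
    dists = [dtw_distance(query, s) for s in train_seqs]
    nearest = np.argsort(dists)[:k]
    labels = [train_labels[i] for i in nearest]
    return max(set(labels), key=labels.count)
```

In the paper the frames are further quantized against a clustered dictionary before matching; here the DTW cost is computed directly on frame feature vectors to keep the sketch self-contained.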
Joints, Cameras, Histograms, Three-dimensional displays, Vectors, Hip
N. A. Azis, H.-J. Choi and Y. Iraqi, "Substitutive skeleton fusion for human action recognition," 2015 International Conference on Big Data and Smart Computing (BigComp), Jeju, South Korea, 2015, pp. 170-177.