2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2016)
Las Vegas, NV, United States
June 26, 2016 to July 1, 2016
In this paper, we present two large multi-modal video datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD), which contains more than 50,000 gestures and was collected for the "one-shot-learning" competition. To increase the potential of the original data, we designed two new, well-curated datasets covering 249 gesture labels and including 47,933 gestures, with the begin and end frames of each gesture manually labeled in the sequences. Using these datasets, we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for "user independent" gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures, while the second is designed for gesture classification from segmented data. A baseline method based on the bag-of-visual-words model is also presented.
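The abstract names a bag-of-visual-words baseline without detailing it. The following is a minimal sketch of the generic BoVW recognition pipeline (local descriptors → k-means codebook → quantized histogram → classification), not the paper's actual implementation: the feature extractor, codebook size, and 1-NN classifier here are illustrative assumptions, and the data is synthetic.

```python
import numpy as np

# Hedged sketch of a bag-of-visual-words (BoVW) pipeline:
# local descriptors -> codebook via k-means -> histogram -> classify.
# All data is synthetic; the paper's features/classifier may differ.

rng = np.random.default_rng(0)

def kmeans(X, k, iters=10):
    """Tiny k-means: returns a codebook of k visual words (cluster centers)."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def bovw_histogram(desc, codebook):
    """Quantize a video's descriptors against the codebook; L1-normalized histogram."""
    d = ((desc[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(1)
    h = np.bincount(words, minlength=len(codebook)).astype(float)
    return h / h.sum()

# Synthetic "videos": each yields 50 local 8-D descriptors around a class mean.
train_desc = [rng.normal(c, 0.3, size=(50, 8)) for c in (0.0, 1.0, 2.0) for _ in range(5)]
train_y = [c for c in range(3) for _ in range(5)]

codebook = kmeans(np.vstack(train_desc), k=6)
train_h = np.array([bovw_histogram(d, codebook) for d in train_desc])

def predict(desc):
    h = bovw_histogram(desc, codebook)
    # 1-NN over training histograms (a real baseline might use an SVM instead)
    return train_y[int(((train_h - h) ** 2).sum(1).argmin())]

test_desc = rng.normal(1.0, 0.3, size=(50, 8))  # descriptors resembling class 1
print(predict(test_desc))
```

The same quantize-and-histogram step applies whether descriptors come from RGB, depth, or both streams; per-modality histograms can simply be concatenated before classification.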
Index Terms: Gesture recognition, Training, Indexes, Computer vision, Testing, Conferences
Jun Wan, Stan Z. Li, Yibing Zhao, Shuai Zhou, Isabelle Guyon, Sergio Escalera, "ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition", 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 761-769, 2016, doi:10.1109/CVPRW.2016.100