Issue No. 08, Aug. 2016 (vol. 38)
ISSN: 0162-8828
pp: 1692-1706
Natalia Neverova , INSA-Lyon, LIRIS, UMR5205, F-69621, Université de Lyon, CNRS, France
Christian Wolf , INSA-Lyon, LIRIS, UMR5205, F-69621, Université de Lyon, CNRS, France
Graham Taylor , School of Engineering, University of Guelph, Canada
Florian Nebout , Awabot, Villeurbanne, Rhône-Alpes, France
ABSTRACT
We present a method for gesture detection and localisation based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at three temporal scales. Key to our technique is a training strategy which exploits: i) careful initialization of individual modalities; and ii) gradual fusion involving random dropping of separate channels (dubbed ModDrop) for learning cross-modality correlations while preserving uniqueness of each modality-specific representation. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams. Fusing multiple modalities at several spatial and temporal scales leads to a significant increase in recognition rates, allowing the model to compensate for errors of the individual classifiers as well as noise in the separate channels. Furthermore, the proposed ModDrop training technique ensures robustness of the classifier to missing signals in one or several channels, enabling it to produce meaningful predictions from any number of available modalities. In addition, we demonstrate the applicability of the proposed fusion scheme to modalities of arbitrary nature by experiments on the same dataset augmented with audio.
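The core mechanism described above lends itself to a short illustration. Below is a minimal, hypothetical sketch of the ModDrop idea: during fusion training, each modality's input is dropped (zeroed) independently with some probability, so the fused network learns cross-modal correlations while remaining usable when channels go missing. The module names, feature dimensions, and drop probability are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Hypothetical sketch of ModDrop-style fusion training (not the paper's code).
import torch
import torch.nn as nn


class ModDropFusion(nn.Module):
    def __init__(self, modality_dims, hidden_dim, num_classes, p_drop=0.1):
        super().__init__()
        self.p_drop = p_drop  # probability of dropping an entire modality channel
        # One small encoder per modality (illustrative; the paper uses deeper,
        # modality-specific networks).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU())
            for d in modality_dims
        )
        self.classifier = nn.Linear(hidden_dim * len(modality_dims), num_classes)

    def forward(self, inputs):
        feats = []
        for x, enc in zip(inputs, self.encoders):
            h = enc(x)
            if self.training:
                # ModDrop: zero the whole modality, per sample, with prob p_drop,
                # forcing the fusion layers to cope with missing channels.
                keep = (torch.rand(h.size(0), 1, device=h.device) > self.p_drop).float()
                h = h * keep
            feats.append(h)
        return self.classifier(torch.cat(feats, dim=1))


# Example with three modalities (e.g., skeleton, depth, and audio features;
# the dimensions below are made up for illustration).
model = ModDropFusion(modality_dims=[64, 128, 40], hidden_dim=32, num_classes=21)
batch = [torch.randn(8, d) for d in (64, 128, 40)]
logits = model(batch)  # shape: (8, 21)
```

At test time (`model.eval()`) nothing is dropped; an absent modality can simply be fed as zeros, which is the condition this training scheme prepares the fusion layers to handle.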
INDEX TERMS
Joints, Training, Streaming media, Feature extraction, Machine learning, Context
CITATION

N. Neverova, C. Wolf, G. Taylor and F. Nebout, "ModDrop: Adaptive Multi-Modal Gesture Recognition," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 38, no. 8, pp. 1692-1706, 2016.
doi:10.1109/TPAMI.2015.2461544