2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (2018)
Lake Tahoe, NV, USA
Mar 12, 2018 to Mar 15, 2018
ISBN: 978-1-5386-4886-5
pp: 1616-1624
We present a data-efficient representation learning approach that learns video representations from a small amount of labeled data. We propose ActionFlowNet, a multitask model that trains a single-stream convolutional neural network directly from raw pixels to jointly estimate optical flow while recognizing actions, capturing both appearance and motion in a single model. The model effectively learns video representations from motion information in unlabeled videos. It improves action recognition accuracy by a large margin (23.6%) over state-of-the-art CNN-based unsupervised representation learning methods trained without external large-scale data or additional optical flow input. Without pretraining on large external labeled datasets, our model, by exploiting motion information well, achieves recognition accuracy competitive with models trained on large labeled datasets such as ImageNet and Sports-1M.
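The multitask objective described in the abstract combines an action classification loss with an optical flow estimation loss on the outputs of a single network. A minimal sketch of such a combined loss is shown below; the function name, the endpoint-error formulation for the flow term, and the weighting factor `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multitask_loss(action_logits, action_label, flow_pred, flow_target, lam=1.0):
    """Weighted sum of a classification loss and a flow regression loss.

    action_logits: 1-D array of unnormalized class scores.
    action_label:  index of the ground-truth action class.
    flow_pred, flow_target: arrays of shape (2, H, W) holding the
        horizontal/vertical flow components per pixel.
    lam: assumed trade-off weight between the two tasks.
    """
    # Cross-entropy over softmax of the logits (numerically stabilized).
    z = action_logits - action_logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[action_label]
    # Average endpoint error: per-pixel L2 distance between flow vectors.
    epe = np.sqrt(((flow_pred - flow_target) ** 2).sum(axis=0)).mean()
    return ce + lam * epe
```

In this sketch the flow term acts as an auxiliary supervision signal, so the shared features must encode motion even when action labels are scarce.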
image motion analysis, image recognition, image representation, image sequences, learning (artificial intelligence), neural nets, video signal processing

J. Y. Ng, J. Choi, J. Neumann and L. S. Davis, "ActionFlowNet: Learning Motion Representation for Action Recognition," 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 2018, pp. 1616-1624.