2017 IEEE International Conference on Computer Vision (ICCV) (2017)
Oct. 22, 2017 to Oct. 29, 2017
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/ICCV.2017.597
In this paper, we address the problem of estimating the positions of human joints, i.e., articulated pose estimation. Recent state-of-the-art solutions model two key issues, joint detection and spatial configuration refinement, together using convolutional neural networks. Our work mainly focuses on spatial configuration refinement by statistically reducing the variation of human poses, motivated by the observation that the scattered distribution of the relative locations of joints (e.g., the left wrist is distributed nearly uniformly in a circular area around the left shoulder) makes learning convolutional spatial models difficult. We present a two-stage normalization scheme, human body normalization and limb normalization, to make the distribution of the relative joint locations compact, resulting in easier learning of convolutional spatial models and more accurate pose estimation. In addition, our empirical results show that incorporating multi-scale supervision and multi-scale fusion into the joint detection network is beneficial. Experimental results demonstrate that our method consistently outperforms state-of-the-art methods on the benchmarks.
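To illustrate the idea behind the first (global) normalization stage, the following is a minimal sketch, not the paper's actual implementation: it translates a 2D pose so a reference hip joint sits at the origin, scales by torso length, and rotates so the hip-neck axis is vertical, which makes relative joint locations more compact across samples. The joint indices and the exact choice of transform are illustrative assumptions.

```python
import numpy as np

def normalize_pose(joints, hip_idx=0, neck_idx=1):
    """Globally normalize a 2D pose (illustrative sketch, not the paper's
    exact procedure): translate the hip to the origin, scale to unit torso
    length, and rotate so the hip-to-neck vector points along +y."""
    pose = joints - joints[hip_idx]          # translate: hip at the origin
    torso = pose[neck_idx]                   # hip-to-neck vector
    pose = pose / np.linalg.norm(torso)      # scale: unit torso length
    # Rotate so the torso vector aligns with the +y axis.
    angle = np.arctan2(torso[0], torso[1])   # angle of torso from +y
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s],
                  [s,  c]])                  # counterclockwise rotation
    return pose @ R.T                        # apply R to each joint (row)
```

After this transform every pose has its hip at the origin and its neck at (0, 1), so a subsequent spatial model only has to account for residual articulation (which the paper's second stage, limb normalization, further reduces per limb).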
convolution, learning (artificial intelligence), neural nets, pose estimation
K. Sun, C. Lan, J. Xing, W. Zeng, D. Liu and J. Wang, "Human Pose Estimation Using Global and Local Normalization," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 5600-5608.