ISSN: 0162-8828
Xiaodan Liang, School of Data and Computer Science, Sun Yat-sen University, Guangzhou, Guangdong, China (e-mail: xdliang328@gmail.com)
Ke Gong, Human Cyber Physical Intelligence Integration Lab, Sun Yat-sen University, Guangzhou 510275, Guangdong, China (e-mail: kegong936@gmail.com)
Xiaohui Shen, Adobe Research, Adobe Systems Inc., San Jose, CA 95110, USA (e-mail: xshen@adobe.com)
Liang Lin, School of Information Science and Technology, Sun Yat-sen University, Guangzhou, Guangdong, China (e-mail: linliang@ieee.org)
ABSTRACT
Human parsing and pose estimation have recently received considerable interest due to their substantial application potential. However, existing datasets contain limited numbers of images and annotations, and lack variety in human appearance and coverage of challenging cases in unconstrained environments. In this paper, we introduce a new benchmark named "Look into Person (LIP)" that provides a significant advance in scalability, diversity, and difficulty, all of which are crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels and 16 body joints, captured from a broad range of viewpoints, occlusions, and background complexities. Using these rich annotations, we perform detailed analyses of the leading human parsing and pose estimation approaches, thereby gaining insight into the successes and failures of these methods. To further exploit the semantic correlation between the two tasks, we propose a novel joint human parsing and pose estimation network that performs efficient context modeling and can simultaneously predict parsing and pose with extremely high quality. Furthermore, we simplify the network to solve human parsing alone through a novel self-supervised structure-sensitive learning approach, which imposes human pose structures on the parsing results without resorting to extra supervision. The datasets, code, and models are available at http://www.sysu-hcp.net/lip/.
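The abstract describes the self-supervised structure-sensitive learning idea only at a high level: pose structure derived from the parsing maps themselves, rather than from extra labels, is used to regularize the parsing loss. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' released implementation; the function names (part_centers, structure_sensitive_loss) and the exact weighting scheme (mean displacement between part centers of the predicted and ground-truth parsing maps scaling the per-image cross-entropy) are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def part_centers(part_probs):
    """Soft centers of mass of each part map, used as pseudo-joints.

    part_probs: (B, P, H, W) per-part probability (or one-hot) maps.
    Returns (B, P, 2) normalized (y, x) coordinates per part.
    """
    b, p, h, w = part_probs.shape
    ys = torch.linspace(0.0, 1.0, h, device=part_probs.device).view(1, 1, h, 1)
    xs = torch.linspace(0.0, 1.0, w, device=part_probs.device).view(1, 1, 1, w)
    mass = part_probs.sum(dim=(2, 3)).clamp_min(1e-6)  # avoid divide-by-zero for absent parts
    cy = (part_probs * ys).sum(dim=(2, 3)) / mass
    cx = (part_probs * xs).sum(dim=(2, 3)) / mass
    return torch.stack([cy, cx], dim=-1)


def structure_sensitive_loss(logits, target, num_parts):
    """Pixel-wise cross-entropy reweighted by the distance between the
    pseudo-joint structures of predicted and ground-truth parsing maps.

    logits: (B, P, H, W) raw parsing scores; target: (B, H, W) long labels.
    """
    probs = F.softmax(logits, dim=1)
    gt_onehot = F.one_hot(target, num_parts).permute(0, 3, 1, 2).float()

    # Pseudo-joints come from the parsing maps themselves: no pose labels needed.
    pred_joints = part_centers(probs)
    gt_joints = part_centers(gt_onehot)

    # Mean joint displacement per image acts as a structure-sensitivity weight.
    weight = (pred_joints - gt_joints).norm(dim=-1).mean(dim=1)  # (B,)

    ce = F.cross_entropy(logits, target, reduction="none").mean(dim=(1, 2))  # (B,)
    return (weight * ce).mean()
```

In the paper's formulation the joints are generated from centers of specific part regions; here, for simplicity, every part contributes one pseudo-joint. The intended effect is the same: predictions whose implied body structure deviates from the ground truth incur a proportionally larger parsing loss.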
INDEX TERMS
Pose estimation, Human parsing, Task analysis, Benchmark testing, Semantics, Image segmentation, Context modeling
CITATION

X. Liang, K. Gong, X. Shen and L. Lin, "Look into Person: Joint Body Parsing & Pose Estimation Network and a New Benchmark," IEEE Transactions on Pattern Analysis and Machine Intelligence, doi: 10.1109/TPAMI.2018.2820063.