2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017)
Honolulu, Hawaii, USA
July 21, 2017 to July 26, 2017
ISSN: 2160-7516
ISBN: 978-1-5386-0733-6
pp: 1595-1603
ABSTRACT
In this paper, we present a Self-Supervised Neural Aggregation Network (SS-NAN) for human parsing. SS-NAN adaptively learns to aggregate multi-scale features at each pixel "address". To further improve feature discriminative capacity, a self-supervised joint loss is adopted as an auxiliary learning strategy, which imposes human joint structure on parsing results without resorting to extra supervision. The proposed SS-NAN is end-to-end trainable. It can be integrated into any advanced neural network to help aggregate features according to their importance at different positions and scales, and to incorporate rich high-level knowledge of human joint structure from a global perspective, which in turn improves the parsing results. Comprehensive evaluations on the recent Look into Person (LIP) and PASCAL-Person-Part benchmark datasets demonstrate that our method significantly outperforms other state-of-the-art approaches.
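The core aggregation idea described above, a per-pixel weighting of multi-scale features, can be sketched as follows. This is a minimal NumPy illustration of attention-weighted scale aggregation; the function name, array shapes, and softmax normalization are assumptions for clarity, not the authors' actual implementation:

```python
import numpy as np

def aggregate_multiscale(features, weight_logits):
    """Aggregate multi-scale features with learned per-pixel weights.

    features:      (S, H, W, C) feature maps from S scales,
                   resized to a common H x W resolution
    weight_logits: (S, H, W) per-pixel, per-scale importance scores
                   (in SS-NAN these would be predicted by the network)
    returns:       (H, W, C) aggregated feature map
    """
    # Softmax over the scale axis so weights at each pixel sum to 1
    e = np.exp(weight_logits - weight_logits.max(axis=0, keepdims=True))
    w = e / e.sum(axis=0, keepdims=True)          # (S, H, W)
    # Weighted sum of the scale-wise features at every pixel
    return (w[..., None] * features).sum(axis=0)  # (H, W, C)
```

With uniform (zero) logits this reduces to a plain average over scales; the learned logits let the network emphasize different scales at different image positions.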
INDEX TERMS
Neural networks, Aggregates, Computer vision, Semantics, Training, Benchmark testing, Computer architecture
CITATION

J. Zhao et al., "Self-Supervised Neural Aggregation Networks for Human Parsing," 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, Hawaii, USA, 2017, pp. 1595-1603.
doi:10.1109/CVPRW.2017.204