Exploiting Feature Representations Through Similarity Learning and Ranking Aggregation for Person Re-identification
2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017) (2017)
Washington, DC, USA
May 30, 2017 to June 3, 2017
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/FG.2017.133
Person re-identification has received special attention from the human analysis community in the last few years. To address the challenges in this field, many researchers have proposed different strategies, which basically exploit either cross-view invariant features or cross-view robust metrics. In this work, we propose to combine different feature representations through ranking aggregation. Spatial information, which potentially benefits person matching, is represented using a 2D body model, from which color and texture information are extracted and combined. We also consider contextual information (background and foreground data), automatically extracted via a Deep Decompositional Network, and the use of Convolutional Neural Network (CNN) features. To describe the matching between images we use the polynomial feature map, also taking into account local and global information. Finally, the Stuart ranking aggregation method is employed to combine complementary ranking lists obtained from different feature representations. Experimental results demonstrate that we improve the state of the art on the VIPeR and PRID450s datasets, achieving 58.77% and 71.56% top-1 rank recognition rates, respectively, as well as obtaining competitive results on the CUHK01 dataset.
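The Stuart aggregation mentioned in the abstract scores each gallery identity by how consistently it ranks near the top across the individual ranking lists, using order statistics of the normalized ranks. A minimal sketch of this idea (not the authors' implementation; the function names and the simple per-item scoring loop are illustrative) follows the standard Stuart formula: for an item with sorted rank ratios r_1 <= ... <= r_N, the score is N! * V_N, where V_0 = 1 and V_k = sum_i (-1)^(i-1) * V_(k-i) * r_(N-k+1)^i / i!; smaller scores indicate stronger consensus.

```python
import math

def stuart_pvalue(rank_ratios):
    """Stuart order-statistics score for one item.

    rank_ratios: the item's rank in each list divided by that list's length.
    Returns the probability that N independent uniform variables have order
    statistics jointly below the observed ratios (smaller = better consensus).
    """
    r = sorted(rank_ratios)          # r[0] <= ... <= r[N-1]
    n = len(r)
    v = [1.0]                        # V_0 = 1
    for k in range(1, n + 1):
        vk = 0.0
        for i in range(1, k + 1):
            # r_{N-k+1} in 1-based notation is r[n - k] here
            vk += (-1) ** (i - 1) * v[k - i] * r[n - k] ** i / math.factorial(i)
        v.append(vk)
    return math.factorial(n) * v[n]

def aggregate(ranking_lists):
    """Fuse several ranking lists (each a list of IDs, best first)
    into one consensus ranking, best first."""
    ids = set().union(*ranking_lists)
    scores = {}
    for item in ids:
        ratios = [(lst.index(item) + 1) / len(lst)
                  for lst in ranking_lists if item in lst]
        scores[item] = stuart_pvalue(ratios)
    return sorted(ids, key=lambda i: scores[i])
```

For example, an identity ranked first in two of three lists beats one ranked first only once: `aggregate([["a","b","c"], ["a","c","b"], ["b","a","c"]])` returns `["a", "b", "c"]`. In the paper's setting, each list would come from one feature representation (body-model color/texture, contextual, or CNN features).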
J. C. Jacques, X. Baro and S. Escalera, "Exploiting Feature Representations Through Similarity Learning and Ranking Aggregation for Person Re-identification," 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), Washington, DC, USA, 2017, pp. 302-309.