Issue No. 03 - March (2016 vol. 38)
ISSN: 0162-8828
pp: 563-577
Yanwei Fu, Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
Timothy M. Hospedales, School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, United Kingdom
Tao Xiang, School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, United Kingdom
Jiechao Xiong, School of Mathematical Sciences, Peking University, China
Shaogang Gong, School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, United Kingdom
Yizhou Wang, National Engineering Laboratory for Video Technology, Cooperative Medianet Innovation Center, Key Laboratory of Machine Perception (MoE), School of EECS, Peking University, Beijing, China
Yuan Yao, School of Mathematical Sciences, Peking University, China
ABSTRACT
The problem of estimating subjective visual properties from images and videos has attracted increasing interest. A subjective visual property is useful either on its own (e.g., image and video interestingness) or as an intermediate representation for visual recognition (e.g., a relative attribute). Due to its ambiguous nature, annotating the value of a subjective visual property for learning a prediction model is challenging. To make the annotation more reliable, recent studies employ crowdsourcing tools to collect pairwise comparison labels. However, using crowdsourced data also introduces outliers. Existing methods rely on majority voting to prune the annotation outliers/errors; they thus require a large number of pairwise labels to be collected. More importantly, as a local outlier detection method, majority voting is ineffective in identifying outliers that cause global ranking inconsistencies. In this paper, we propose a more principled way to identify annotation outliers by formulating the subjective visual property prediction task as a unified robust learning to rank problem, tackling outlier detection and learning to rank jointly. This differs from existing methods in that (1) the proposed method integrates local pairwise comparison labels together to minimise a cost that corresponds to global inconsistency of ranking order, and (2) the outlier detection and learning to rank problems are solved jointly. This not only leads to better detection of annotation outliers but also enables learning with extremely sparse annotations.
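The core idea can be pictured with a small sketch: each item receives a latent ranking score, each crowdsourced pairwise label receives a sparse outlier variable, and the two are estimated jointly so that the outliers are exactly those labels that conflict with the globally consistent ranking. The Python snippet below is a minimal illustration of this idea, not the authors' implementation; the particular objective, the fixed regularisation weight lam (the paper instead traces a regularisation path), and the helper names soft_threshold and robust_rank are all assumptions made for the purpose of the example.

import numpy as np

def soft_threshold(r, t):
    # Elementwise soft-thresholding, the proximal operator of t * ||.||_1.
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

def robust_rank(pairs, y, n_items, lam=0.5, n_iters=100):
    # pairs: (m, 2) integer array; row k = (i, j) means label k compares item i with item j.
    # y:     length-m array, +1 if the annotator preferred i over j, -1 otherwise.
    # Jointly minimises 0.5 * ||X s - (y - e)||^2 + lam * ||e||_1 over the item
    # scores s and the per-label outlier variables e, by alternating exact
    # minimisation (the objective is jointly convex in s and e).
    pairs = np.asarray(pairs)
    y = np.asarray(y, dtype=float)
    m = len(pairs)
    # Pairwise incidence matrix: X @ s gives the predicted score difference
    # s[i] - s[j] for every compared pair (i, j).
    X = np.zeros((m, n_items))
    X[np.arange(m), pairs[:, 0]] = 1.0
    X[np.arange(m), pairs[:, 1]] = -1.0
    s = np.zeros(n_items)   # latent global ranking scores
    e = np.zeros(m)         # sparse outlier variable per pairwise label
    for _ in range(n_iters):
        # With e fixed, the scores solve a least-squares problem on the
        # outlier-corrected labels.
        s, *_ = np.linalg.lstsq(X, y - e, rcond=None)
        # With s fixed, the outliers are the soft-thresholded residuals: only
        # labels that strongly contradict the global ranking stay non-zero.
        e = soft_threshold(y - X @ s, lam)
    return s, e

# Toy usage: three items with a consistent ordering 0 > 1 > 2 plus one
# contradictory label; the contradiction surfaces as a non-zero entry in e.
pairs = [(0, 1), (1, 2), (0, 2), (2, 0)]
y = [+1, +1, +1, +1]   # the last label contradicts the first three
scores, outliers = robust_rank(pairs, y, n_items=3, lam=0.5)
print(np.argsort(-scores))   # ranking from best to worst: [0 1 2]
print(outliers)              # only the last entry is clearly non-zero

Because the outlier variables are fit against the globally consistent score differences rather than against repeated votes on a single pair, an outlying label can be flagged even when it is the only annotation for that pair, which is what allows learning from extremely sparse annotations.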
INDEX TERMS
Visualization, Robustness, Predictive models, Ranking (statistics), Crowdsourcing, Training data, Noise measurement, Subjective visual properties, Outlier detection, Regularisation path, Robust ranking, Robust learning to rank
CITATION
Yanwei Fu, Timothy M. Hospedales, Tao Xiang, Jiechao Xiong, Shaogang Gong, Yizhou Wang, Yuan Yao, "Robust Subjective Visual Property Prediction from Crowdsourced Pairwise Labels", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 38, no. 3, pp. 563-577, March 2016, doi:10.1109/TPAMI.2015.2456887