Brussels, Belgium
Dec. 10, 2012
ISBN: 978-1-4673-5164-5
pp: 514-521
ABSTRACT
Crowdsourcing services have become popular, making it easy and fast to have datasets labeled by multiple annotators for supervised learning tasks. Unfortunately, in this context, annotators are not reliable, as they may have different levels of experience or knowledge. Furthermore, the data to be labeled may also vary in difficulty. How do we deal with data that are hard to label and annotators who are unreliable? In this paper, we present a probabilistic model for learning from multiple naive annotators, allowing annotators to decline to label an instance when they are unsure. Both the errors and the ignorance of annotators are integrated separately into the proposed Bayesian model. Experiments on several datasets show that our method achieves superior performance compared to other efficient learning algorithms.
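The abstract does not give the model details. As a rough illustration of the setting it describes (multiple unreliable annotators who may abstain when unsure), the following sketch simulates such annotators and aggregates their answers with a simple EM-style reliability estimate in the spirit of Dawid-Skene. This is not the authors' Bayesian model; all names, parameters, and the aggregation rule are illustrative assumptions.

```python
# Minimal sketch, NOT the paper's model: simulate annotators with varying
# reliability and ignorance (abstention), then estimate per-annotator
# reliability with a simple EM-style loop. Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_items, n_annotators = 200, 5
true_labels = rng.integers(0, 2, size=n_items)            # hidden ground truth
reliability = rng.uniform(0.55, 0.95, size=n_annotators)  # P(correct | answered)
ignorance = rng.uniform(0.0, 0.4, size=n_annotators)      # P(abstain)

# Collect labels: -1 marks an abstention ("I don't know").
labels = np.full((n_items, n_annotators), -1)
for j in range(n_annotators):
    answered = rng.random(n_items) >= ignorance[j]
    correct = rng.random(n_items) < reliability[j]
    labels[answered, j] = np.where(correct[answered],
                                   true_labels[answered],
                                   1 - true_labels[answered])

# EM-style aggregation: alternate between (E) soft labels from reliability-
# weighted votes and (M) re-estimating each annotator's accuracy on the
# items they actually answered.
acc = np.full(n_annotators, 0.7)  # initial guess for reliabilities
for _ in range(20):
    # E-step: log-odds that each item's label is 1, summing annotator evidence.
    log_odds = np.zeros(n_items)
    for j in range(n_annotators):
        mask = labels[:, j] >= 0
        llr = np.log(acc[j] / (1 - acc[j]))
        log_odds[mask] += np.where(labels[mask, j] == 1, llr, -llr)
    posterior = 1 / (1 + np.exp(-log_odds))  # P(y_i = 1 | all answers)
    hard = (posterior > 0.5).astype(int)
    # M-step: accuracy of each annotator against the current consensus.
    for j in range(n_annotators):
        mask = labels[:, j] >= 0
        if mask.any():
            acc[j] = np.clip((labels[mask, j] == hard[mask]).mean(), 0.05, 0.95)

print("estimated reliabilities:", np.round(acc, 2))
print("consensus accuracy vs. truth:", (hard == true_labels).mean())
```

Note one simplification relative to the paper's premise: here an abstention simply contributes no evidence for an item, whereas the abstract states that errors and ignorance are modeled as separate components of the Bayesian model.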
INDEX TERMS
Supervised learning, Probabilistic logic, Bayesian methods, Approximation algorithms, Reliability, Estimation, Training, Bayesian Analysis, Crowdsourcing, Data Quality, Multiple Annotators, Ignorance
CITATION
Chirine Wolley, Mohamed Quafafou, "Learning from Multiple Annotators: When Data is Hard and Annotators are Unreliable", 2012 IEEE 12th International Conference on Data Mining Workshops (ICDMW 2012), pp. 514-521, doi:10.1109/ICDMW.2012.48