Obtaining High-Quality Relevance Judgments Using Crowdsourcing
Sept.-Oct. 2012 (vol. 16 no. 5)
pp. 20-27
Jeroen B.P. Vuurens, The Hague University of Applied Science
Arjen P. de Vries, Centrum Wiskunde & Informatica
The performance of information retrieval (IR) systems is commonly evaluated using a test set with known relevance. Crowdsourcing is one method for identifying the documents relevant to each query in the test set. However, the quality of relevance judgments obtained through crowdsourcing can be questionable, because it relies on workers of unknown quality, with possible spammers among them. To detect spammers, the authors' algorithm compares judgments between workers; they evaluate their approach by comparing the consistency of crowdsourced ground truth to that obtained from expert annotators, and conclude that crowdsourcing can match the quality obtained from the latter.
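The general idea of spam detection by comparing judgments between workers can be sketched as follows. This is an illustrative sketch only, not the authors' actual algorithm from the article: it scores each worker by average pairwise agreement with co-workers on shared items and flags workers below a threshold. The function names, data layout, and the 0.5 threshold are assumptions chosen for illustration.

```python
from itertools import combinations
from collections import defaultdict

def agreement_scores(judgments):
    """judgments: {worker: {item: label}} -> {worker: mean agreement with others}."""
    totals = defaultdict(lambda: [0, 0])  # worker -> [agreements, comparisons]
    for w1, w2 in combinations(judgments, 2):
        shared = judgments[w1].keys() & judgments[w2].keys()
        for item in shared:
            same = judgments[w1][item] == judgments[w2][item]
            for w in (w1, w2):
                totals[w][0] += same   # bool counts as 0/1
                totals[w][1] += 1
    return {w: a / n for w, (a, n) in totals.items() if n}

def flag_spammers(judgments, threshold=0.5):
    """Flag workers whose average agreement falls below the threshold."""
    return {w for w, s in agreement_scores(judgments).items() if s < threshold}
```

For example, a worker who systematically contradicts two agreeing co-workers on the same items would receive an agreement score of 0 and be flagged, while the consistent workers would not.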
Index Terms:
Accuracy, Internet, Conferences, Information retrieval, Unsolicited electronic mail, Detection algorithms, Reliability, spam, crowdsourcing, judgment, quality, relevance
Citation:
Jeroen B.P. Vuurens, Arjen P. de Vries, "Obtaining High-Quality Relevance Judgments Using Crowdsourcing," IEEE Internet Computing, vol. 16, no. 5, pp. 20-27, Sept.-Oct. 2012, doi:10.1109/MIC.2012.71