Crowdsourcing for relevance evaluation
- 30 November 2008
- journal article
- Published by Association for Computing Machinery (ACM) in ACM SIGIR Forum
- Vol. 42 (2), 9-15
- https://doi.org/10.1145/1480506.1480508
Abstract
Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each performs a small evaluation task.