Evaluation of Commonsense Knowledge with Mechanical Turk

2010 (modified: 16 Jul 2019) · Mturk@HLT-NAACL 2010
Abstract: Efforts to automatically acquire world knowledge from text suffer from the lack of an easy means of evaluating the resulting knowledge. We describe initial experiments using Mechanical Turk to crowdsource evaluation to non-experts for little cost, resulting in a collection of factoids with associated quality judgements. We describe the method of acquiring usable judgements from the public and the impact of such large-scale evaluation on the task of knowledge acquisition.