Non-Expert Evaluation of Summarization Systems is Risky

2010 (modified: 04 Sept 2019) · Mturk@HLT-NAACL 2010
Abstract: We provide evidence that intrinsic evaluation of summaries using Amazon's Mechanical Turk is quite difficult. Experiments mirroring evaluation at the Text Analysis Conference's summarization track show that non-expert judges are not able to recover system rankings derived from experts.
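The paper's central comparison is between the system rankings induced by expert judges and those induced by Mechanical Turk workers. A minimal sketch of that style of check, computing Kendall's tau rank correlation between the two rankings, is shown below; all scores are invented placeholders for illustration, not data from the paper.

```python
# Sketch: compare expert vs. non-expert system rankings via Kendall's tau.
# All scores below are hypothetical placeholders, not results from the paper.
from scipy.stats import kendalltau

systems = ["sys_A", "sys_B", "sys_C", "sys_D", "sys_E"]

# Hypothetical mean quality scores each judge pool assigned to each system.
expert_scores = [4.1, 3.8, 3.5, 2.9, 2.2]
turker_scores = [3.6, 3.9, 3.0, 3.2, 2.8]

tau, p_value = kendalltau(expert_scores, turker_scores)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
# A tau near 1 would mean non-experts recover the expert ranking;
# the paper's finding is that agreement is too low for this to hold.
```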