How Many Ratings per Item are Necessary for Reliable Significance Testing?

ACL ARR 2025 February Submission2687 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Most machine learning evaluation assumes that machine and human responses are repeatable enough to be measured against data with unitary, authoritative, "gold standard" responses, via simple metrics such as accuracy, precision, and recall. However, AI models have multiple sources of stochasticity, and the human raters who create gold standards tend to disagree with each other, often in meaningful ways, so a single response per input item may not provide enough information. We introduce methods for determining whether an (existing or planned) evaluation dataset has enough responses per item to reliably compare the performance of one model to another. We apply our methods to several of the very few extant gold-standard test sets with multiple disaggregated responses per item and show that they usually do not contain enough responses per item for reliable model comparison. Our methods also let us estimate the number of responses per item needed for hypothetical datasets whose response distributions resemble those of the datasets we study. When two models are far apart in predictive performance, fewer raters are needed to compare them confidently, as expected. However, as the models draw closer, a larger number of raters than is currently typical in annotation collection is needed to ensure that the power analysis correctly reflects the difference in performance.
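
The abstract describes a power analysis over the number of ratings per item, but the paper's actual procedure is not reproduced here. The following is only a minimal, hypothetical sketch of that general idea in Python: everything in it is an illustrative assumption, including the binary-label setup, the 0.8 per-rater agreement rate, the simulated model accuracies, the majority-vote gold aggregation, and the paired bootstrap test.

```python
# A minimal, hypothetical sketch (not the paper's procedure): estimate the power of a
# paired bootstrap test comparing two simulated models' accuracy against majority-vote
# gold labels, as a function of the number of ratings collected per item.
import numpy as np

rng = np.random.default_rng(0)

def simulate_dataset(n_items, n_raters, acc_a, acc_b, rater_agreement=0.8):
    """Simulate latent binary labels, noisy per-item ratings, and two models'
    predictions with true accuracies acc_a and acc_b (all values assumed)."""
    truth = rng.integers(0, 2, size=n_items)
    agree = rng.random((n_items, n_raters)) < rater_agreement
    ratings = np.where(agree, truth[:, None], 1 - truth[:, None])
    pred_a = np.where(rng.random(n_items) < acc_a, truth, 1 - truth)
    pred_b = np.where(rng.random(n_items) < acc_b, truth, 1 - truth)
    return ratings, pred_a, pred_b

def paired_bootstrap_pvalue(gold, pred_a, pred_b, n_boot=1000):
    """Two-sided paired bootstrap p-value for the difference in accuracy."""
    correct_a = (pred_a == gold).astype(float)
    correct_b = (pred_b == gold).astype(float)
    n = len(gold)
    idx = rng.integers(0, n, size=(n_boot, n))           # resample items with replacement
    diffs = correct_a[idx].mean(axis=1) - correct_b[idx].mean(axis=1)
    return min(1.0, 2.0 * min((diffs <= 0).mean(), (diffs >= 0).mean()))

def estimated_power(n_raters, n_items=500, acc_a=0.80, acc_b=0.77,
                    n_sims=100, alpha=0.05):
    """Fraction of simulated datasets in which the test detects the true gap."""
    hits = 0
    for _ in range(n_sims):
        ratings, pred_a, pred_b = simulate_dataset(n_items, n_raters, acc_a, acc_b)
        gold = (ratings.mean(axis=1) >= 0.5).astype(int)  # majority vote (ties -> 1)
        if paired_bootstrap_pvalue(gold, pred_a, pred_b) < alpha:
            hits += 1
    return hits / n_sims

if __name__ == "__main__":
    for k in (1, 3, 5, 10, 20):
        print(f"{k:>2} ratings per item -> estimated power {estimated_power(k):.2f}")
```

Under these assumptions, the estimated power typically rises with the number of ratings per item and falls as the two simulated accuracies approach each other, which matches the qualitative pattern the abstract reports.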
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: metrics; reproducibility; statistical testing for evaluation; evaluation
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 2687