Unveiling the Multi-Annotation Process: Examining the Influence of Annotation Quantity and Instance Difficulty on Model Performance

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Findings
Submission Type: Regular Long Paper
Submission Track: Efficient Methods for NLP
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: multi-annotation, label-distribution, PVI, annotator-set, $\mathcal{V}$-Information, entropy, annotation budget, datamaps, cartography
TL;DR: We show that more annotations per instance in a dataset do not necessarily yield better performance from models trained on it.
Abstract: The NLP community has long advocated for the construction of multi-annotator datasets to better capture the nuances of language interpretation, subjectivity, and ambiguity. This paper conducts a retrospective study to show how performance scores can vary when a dataset expands from a single annotation per instance to multiple annotations. We propose a novel multi-annotator simulation process to generate datasets with varying annotation budgets. We show that similar datasets with the same annotation budget can lead to varying performance gains. Our findings challenge the popular belief that models trained on multi-annotation examples always lead to better performance than models trained on single or few-annotation examples.
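The abstract describes simulating multi-annotator datasets under varying annotation budgets. Below is a minimal sketch of one way such a simulation could look; the annotator noise model, the even budget split, and majority-vote aggregation are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of a multi-annotation simulation under a fixed budget.
# The noise model and budget allocation here are assumptions for illustration.
import random
from collections import Counter

def simulate_annotations(true_label, num_labels, num_annotators, noise=0.2):
    """Draw `num_annotators` noisy labels for one instance.

    Each simulated annotator reports the true label with probability
    1 - noise, otherwise a uniformly random other label (an assumed
    annotator model, not the paper's).
    """
    labels = []
    for _ in range(num_annotators):
        if random.random() < noise:
            labels.append(random.choice(
                [l for l in range(num_labels) if l != true_label]))
        else:
            labels.append(true_label)
    return labels

def build_dataset(instances, num_labels, budget):
    """Spend a total annotation `budget` across instances.

    The budget is split evenly here (budget // len(instances) annotations
    per instance); the paper studies how different allocations of the same
    budget affect downstream model performance.
    """
    per_instance = max(1, budget // len(instances))
    dataset = []
    for text, true_label in instances:
        annotations = simulate_annotations(true_label, num_labels, per_instance)
        # Aggregate the annotator set into a single training label by majority vote.
        majority_label = Counter(annotations).most_common(1)[0][0]
        dataset.append((text, annotations, majority_label))
    return dataset

if __name__ == "__main__":
    random.seed(0)
    toy = [("great movie", 1), ("terrible plot", 0), ("fine, I guess", 1)]
    for budget in (3, 9, 15):  # same instances, growing annotation budget
        data = build_dataset(toy, num_labels=2, budget=budget)
        print(budget, [(anns, maj) for _, anns, maj in data])
```

Comparing models trained on the datasets produced at each budget would mirror the paper's retrospective setup: the same instances, annotated at different depths, need not rank in performance by budget size.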
Submission Number: 4848