Abstract: Models trained on crowdsourced labels may not reflect broader population views, because annotator pools are not representative of the population. We propose Population-Aligned Instance Replication (PAIR), a method to address bias caused by non-representative annotator pools. Using a simulation study of offensive language and hate speech, we create two types of annotators with different labeling tendencies and generate datasets with varying proportions of each type. Models trained on unbalanced annotator pools show poor calibration compared to those trained on representative data. By duplicating labels from underrepresented annotator groups to match population proportions, PAIR reduces bias without collecting additional annotations. These results suggest that statistical techniques from survey research can improve model performance. We conclude with practical recommendations for improving the representativeness of training data and model performance.
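The abstract describes PAIR as duplicating labels from underrepresented annotator groups until group proportions match the population. A minimal sketch of that idea is below, assuming per-example annotator-group tags and known target proportions; the function and field names are illustrative, not from the paper.

```python
# Illustrative sketch of the replication step described in the abstract:
# duplicate labels from underrepresented annotator groups so that group
# proportions in the training data approximate known population proportions.
import random
from collections import defaultdict

def pair_replicate(labeled_examples, population_props, group_key="annotator_group", seed=0):
    """Return a new list of labeled examples whose annotator-group mix
    approximates `population_props` by duplicating underrepresented groups.

    labeled_examples: list of dicts, each carrying an annotator-group field.
    population_props: dict mapping group -> target population proportion.
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for ex in labeled_examples:
        by_group[ex[group_key]].append(ex)

    # Use the best-represented group (relative to its target share) as the
    # reference, so every other group is only duplicated, never dropped.
    ratios = {g: len(items) / population_props[g] for g, items in by_group.items()}
    reference = max(ratios.values())

    balanced = []
    for group, items in by_group.items():
        target_count = round(reference * population_props[group])
        copies, remainder = divmod(target_count, len(items))
        balanced.extend(items * copies)          # whole extra copies of the group
        balanced.extend(rng.sample(items, remainder))  # partial copy to hit the target
    rng.shuffle(balanced)
    return balanced
```

For example, with 900 labels from one annotator type and 100 from the other, and target proportions of 0.5 each, the sketch would replicate the minority group's labels roughly ninefold, yielding a training set with balanced group shares at no additional annotation cost.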
Paper Type: Short
Research Area: Human-Centered NLP
Research Area Keywords: human-in-the-loop; participatory/community-based NLP; values and culture; human-centered evaluation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English, not language specific
Submission Number: 3668