When Disagreement Meets Noise: Noise Robust Annotator Embeddings for Subjective NLP

ACL ARR 2026 January Submission 8995 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: annotator disagreement, annotator noise, contrastive learning, human-centered NLP, multi-annotator learning
Abstract: Subjective NLP tasks such as sentiment analysis and hate speech classification often involve inherent annotator disagreement, reflecting diverse perspectives shaped by annotators’ lived experiences. Although conventional approaches resolve disagreement through majority voting or aggregation, these methods risk erasing valuable nuances and minority viewpoints. Recent embedding-based/multitask models have advanced the modeling of annotator-specific judgments, yet their robustness to annotation noise remains underexplored. In this work, we systematically investigate how state-of-the-art multi-annotator learning models perform in the presence of noisy labels and observe a significant performance degradation under such conditions. To address this, we propose Noise Robust Annotator Embedding (NRA-Embed), which integrates Robust InfoNCE (RINCE) contrastive loss to enhance models' robustness under noisy annotation conditions.
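The RINCE objective referenced in the abstract comes from the contrastive-learning literature (Chuang et al., 2022); it generalizes InfoNCE with a parameter q in (0, 1] that down-weights the influence of noisy positive pairs, recovering standard InfoNCE as q approaches 0. A minimal single-anchor sketch is below; the function name, default hyperparameters, and numpy formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rince_loss(pos_sim, neg_sims, q=0.5, lam=0.025, temp=0.1):
    """Sketch of the Robust InfoNCE (RINCE) loss for one anchor.

    pos_sim  : similarity between the anchor and its positive view (scalar)
    neg_sims : similarities with negative views (iterable of scalars)
    q        : robustness parameter in (0, 1]; q -> 0 recovers InfoNCE
    lam      : density weighting on the partition term
    temp     : softmax temperature

    Loss = -s_pos^q / q + (lam * (s_pos + sum s_neg))^q / q,
    where s = exp(similarity / temp).
    """
    s_pos = np.exp(pos_sim / temp)
    s_all = s_pos + np.sum(np.exp(np.asarray(neg_sims, dtype=float) / temp))
    return -(s_pos ** q) / q + ((lam * s_all) ** q) / q
```

The symmetric q-th power on both terms bounds the gradient contribution of any single pair, which is why a hard-to-fit (possibly mislabeled) positive cannot dominate training the way it can under plain InfoNCE.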
Paper Type: Long
Research Area: Computational Social Science, Cultural Analytics, and NLP for Social Good
Research Area Keywords: hate-speech detection, stance detection, model bias/unfairness mitigation, ethical considerations in NLP applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 8995