The Sensitivity of Annotator Bias to Task Definitions

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: NLP models are biased by the data they are trained on, including how that data is annotated, and NLP research increasingly examines the social biases of models, often in light of their training data. This paper is the first to examine the extent to which social bias is sensitive to how data is annotated. We do so by collecting annotations of arguments in the same documents following four different guidelines and from annotators of four different demographic backgrounds. We show that annotations exhibit widely different levels of group disparity depending on which guidelines annotators follow. The differences are not explained by task complexity, but rather by characteristics of these demographic groups, as previously identified by sociological studies.