$\textit{Who Speaks Matters}$: Analysing the Influence of the Speaker's Linguistic Identity on Hate Classification

ACL ARR 2025 February Submission 5621 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) hold considerable promise for scalable content moderation, including hate speech detection. However, they are also known to be brittle and biased against marginalised communities and dialects, so their application to high-stakes tasks like hate speech detection demands critical scrutiny. In this work, we investigate the robustness of LLM-based hate speech classification when explicit and implicit markers of the speaker's ethnicity are injected into the input. For explicit markers, we inject a phrase that states the speaker's linguistic identity; for implicit markers, we inject dialectal features. By analysing how frequently model outputs flip in the presence of these markers, we reveal varying degrees of brittleness across three LLMs, one LM, and five linguistic identities. We find that implicit dialect markers cause model outputs to flip more often than explicit markers, and that the percentage of flips varies across ethnicities. Finally, we find that larger models are more robust. Our findings indicate the need for caution when deploying LLMs for high-stakes tasks like hate speech detection.
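To make the flip-based analysis concrete, the sketch below shows one way the output-flip rate described in the abstract could be computed when an explicit speaker-identity phrase is prepended to each input. This is not the authors' code; the `classify` function, the marker phrase, and the example texts are hypothetical placeholders assumed for illustration.

```python
# Minimal sketch of measuring label flips under an explicit speaker-identity marker.
# `classify` stands in for any hate speech classifier (e.g., an LLM prompted to
# return "hate" or "not hate"); the marker phrase and sample texts are illustrative.

from typing import Callable, List


def flip_rate(
    texts: List[str],
    classify: Callable[[str], str],
    marker: str = "As a speaker of African American Vernacular English, ",
) -> float:
    """Fraction of inputs whose predicted label changes once the marker is prepended."""
    if not texts:
        return 0.0
    flips = 0
    for text in texts:
        baseline = classify(text)            # label without any identity marker
        perturbed = classify(marker + text)  # label with the explicit marker injected
        if perturbed != baseline:
            flips += 1
    return flips / len(texts)


if __name__ == "__main__":
    # Toy classifier that flags a keyword, used only to demonstrate the metric.
    toy_classify = lambda t: "hate" if "slur" in t.lower() else "not hate"
    sample = ["This post contains a slur.", "Have a nice day."]
    print(f"Flip rate: {flip_rate(sample, toy_classify):.2f}")
```

The same loop applies to implicit markers by replacing the prepended phrase with a dialect-rewritten version of each input and comparing labels in the same way.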
Paper Type: Short
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: hate-speech detection, language/cultural bias analysis, sociolinguistics, model bias/fairness evaluation, human factors in NLP
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English, British English, Hinglish, Singaporean English, African American Vernacular English, Jamaican English
Submission Number: 5621