To Labor is Not to Suffer: Exploration of Polarity Association Bias in LLMs for Sentiment Analysis

ACL ARR 2025 February Submission1648 Authors

14 Feb 2025 (modified: 09 May 2025), CC BY 4.0
Abstract: Large language models (LLMs) are widely used for modeling sentiment trends in social media text. We examine whether LLMs exhibit a polarity bias, positive or negative, when encountering specific types of lexical word mentions. Such polarity association bias could lead to the misclassification of $\textit{neutral}$ statements and thus a distorted estimation of sentiment trends. We estimate the severity of the polarity association bias across five widely used LLMs, identifying lexical word mentions, spanning a diverse range of linguistic and psychological categories, that correlate with this bias. Our results show a moderate to strong degree of $\textit{polarity association bias}$ in these LLMs.
Paper Type: Short
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Polarity Association Bias, Sentiment Analysis
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1648