Large Language Models as Readers for Bias Detection

ACL ARR 2025 February Submission 5297 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Detecting bias in media content is essential for ensuring information integrity and promoting inclusivity. Traditional methods typically analyze text from the writer's perspective, leaving the reader's perspective underexplored. This paper introduces an innovative approach that leverages Large Language Models (LLMs) as readers to generate reader-perspective comments for bias detection. A selector identifies the most beneficial comments, which an LLM then uses to detect bias in the original text. We conduct experiments on the BASIL (news bias) and BeyondGender (gender bias) datasets with Llama3.1-8B. The results demonstrate the effectiveness of our method, which achieves performance comparable to GPT-4's. The findings highlight the significance of emotion-related comments, which are generally more beneficial than value-related ones for bias detection. Moreover, the reader's gender may influence comment quality. Finally, comment selection ensures consistent performance regardless of model size and comment combination, demonstrating robustness and reliability.
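The abstract describes a three-stage pipeline: an LLM acting as a reader generates comments on the text, a selector keeps the most beneficial ones, and an LLM then classifies the text using those comments. Below is a minimal sketch of how such a pipeline might be wired together, assuming only a generic `generate(prompt) -> str` callable wrapping whatever LLM is used (e.g., Llama3.1-8B). The reader personas, prompt wording, and LLM-based scoring selector are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the reader-comment pipeline outlined in the abstract.
# `generate` stands in for any instruction-tuned LLM call; the personas,
# prompts, and selector heuristic below are illustrative assumptions.

from typing import Callable, List

Generate = Callable[[str], str]

READER_PERSONAS = [  # hypothetical reader profiles
    "a reader who focuses on the emotions the text evokes",
    "a reader who focuses on the values the text promotes",
]

def reader_comments(text: str, generate: Generate) -> List[str]:
    """Stage 1: elicit reader-perspective comments from the LLM."""
    return [
        generate(f"As {persona}, write a short comment on this text:\n{text}")
        for persona in READER_PERSONAS
    ]

def select_comments(text: str, comments: List[str],
                    generate: Generate, k: int = 1) -> List[str]:
    """Stage 2: keep the k comments judged most helpful for bias detection.
    Here the LLM itself scores helpfulness 0-10 (a stand-in selector)."""
    def score(comment: str) -> int:
        reply = generate(
            "On a scale of 0-10, how helpful is this comment for judging "
            f"whether the text is biased?\nText: {text}\nComment: {comment}\n"
            "Answer with a single integer."
        )
        digits = "".join(ch for ch in reply if ch.isdigit())
        return int(digits) if digits else 0
    return sorted(comments, key=score, reverse=True)[:k]

def detect_bias(text: str, generate: Generate) -> str:
    """Stage 3: classify the original text, conditioned on the selected
    reader comments."""
    selected = select_comments(text, reader_comments(text, generate), generate)
    prompt = (
        "Decide whether the following text is biased. "
        "Answer 'biased' or 'unbiased'.\n"
        f"Text: {text}\nReader comments:\n"
        + "\n".join(f"- {c}" for c in selected)
    )
    return generate(prompt).strip().lower()
```

Under these assumptions, the selector is what makes performance insensitive to which comments are generated: weak or off-topic comments are scored low and filtered out before classification.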
Paper Type: Short
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: hate-speech detection, misinformation detection and analysis
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 5297