Abstract: Detecting bias in media content is crucial for maintaining information integrity and promoting inclusivity.
Traditional methods analyze text from the writer's perspective, extracting textual features that reflect the writer's intent, leaving the reader's perspective underexplored.
This paper investigates whether Large Language Models (LLMs) can be leveraged as readers for bias detection by generating reader-perspective comments.
Experiments are conducted on the BASIL (news bias) and BeyondGender (gender bias) datasets with the LLMs Gemma-7B, Phi-3-3.8B, Llama3.1-8B, Llama3.1-70B, and GPT-4. The results demonstrate the effectiveness of reader-perspective comments for open-source LLMs, which achieve performance comparable to that of GPT-4.
The findings highlight the significance of emotion-related comments, which are generally more beneficial for bias detection than value-related ones.
In addition, experiments on the Llama models show that comment selection ensures consistent performance regardless of model size and comment combination. This study is particularly beneficial for small open-source LLMs.
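To make the described pipeline concrete, the following is a minimal sketch (not the authors' released code) of how reader-perspective comments could be generated with an open-source LLM and then fed back for bias classification. The checkpoint name, prompts, and answer parsing are illustrative assumptions; only the overall two-step idea (comment generation, then detection with the comment) comes from the abstract.

```python
# Illustrative sketch of reader-perspective bias detection with an open-source LLM.
# Model name, prompts, and parsing below are assumptions, not the paper's exact setup.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed checkpoint; any chat LLM works
)

def reader_comment(sentence: str, aspect: str = "emotion") -> str:
    # Ask the model to react as a reader; "emotion" vs. "value" mirrors the
    # comment types contrasted in the abstract.
    prompt = (
        "You are a reader of a news article. Write a short comment describing "
        f"the {aspect}s this sentence evokes in you:\n\"{sentence}\"\nComment:"
    )
    out = generator(prompt, max_new_tokens=60, do_sample=False)
    return out[0]["generated_text"][len(prompt):].strip()

def detect_bias(sentence: str, comment: str) -> str:
    # Classify using the sentence plus the reader-perspective comment as context.
    prompt = (
        "Given the sentence and a reader's comment, answer 'biased' or 'unbiased'.\n"
        f"Sentence: {sentence}\nReader comment: {comment}\nAnswer:"
    )
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    return out[0]["generated_text"][len(prompt):].strip().lower()

if __name__ == "__main__":
    s = "The senator's reckless plan will obviously ruin the economy."
    c = reader_comment(s)
    print("Reader comment:", c)
    print("Prediction:", detect_bias(s, c))
```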
Paper Type: Short
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: hate-speech detection, misinformation detection and analysis
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 3565