Is External Information Useful for Stance Detection with LLMs?

ACL ARR 2025 February Submission 6035 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: In the stance detection task, a text is classified as favorable, opposing, or neutral towards a target. Prior work suggests that providing external information, e.g., excerpts from Wikipedia, improves stance detection performance. However, whether such information benefits large language models (LLMs) remains an open question, despite their wide adoption in many reasoning tasks. In this study, we systematically evaluate how external information affects stance detection across eight LLMs and three datasets with 12 targets. Surprisingly, we find that such information degrades performance in most cases, with macro F1 scores dropping by up to 15.9%. The degradation is even more pronounced, reaching a 28.1% drop, when stance biases are introduced into the external information, as LLMs tend to align their predictions with the stance of the provided information rather than with the ground-truth stance of the given text. We also find that fine-tuning mitigates this bias but does not fully eliminate it. Our findings, in contrast to previous literature on BERT-based systems suggesting that external information enhances performance, highlight the risks of information biases in LLM-based stance classifiers.
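To make the evaluation setup concrete, the sketch below shows one plausible way to prompt an LLM for stance detection with and without external context and to score predictions by macro F1, as the abstract describes. This is not the authors' code: `query_llm` is a hypothetical placeholder for any chat-completion client, and the label set and prompt wording are assumptions inferred from the abstract.

```python
# Minimal sketch (not the paper's implementation): zero-shot stance
# detection with an LLM, toggling external information on or off,
# evaluated by macro F1. `query_llm` is a hypothetical stand-in for
# a real LLM API call.
from sklearn.metrics import f1_score

LABELS = ["favor", "against", "neutral"]  # assumed label set

def build_prompt(text: str, target: str, context: str | None = None) -> str:
    """Compose a stance prompt; optionally prepend external information."""
    prompt = ""
    if context:  # e.g., a Wikipedia excerpt about the target
        prompt += f"Background information:\n{context}\n\n"
    prompt += (
        f"What is the stance of the following text towards '{target}'?\n"
        f"Text: {text}\n"
        f"Answer with one word: favor, against, or neutral."
    )
    return prompt

def query_llm(prompt: str) -> str:
    """Placeholder: wire this to an actual LLM client of your choice."""
    raise NotImplementedError

def evaluate(examples, use_context: bool) -> float:
    """Macro F1 over (text, target, context, gold_label) tuples."""
    preds, golds = [], []
    for text, target, context, gold in examples:
        prompt = build_prompt(text, target, context if use_context else None)
        answer = query_llm(prompt).strip().lower()
        preds.append(answer if answer in LABELS else "neutral")  # fallback
        golds.append(gold)
    return f1_score(golds, preds, labels=LABELS, average="macro")
```

Comparing `evaluate(examples, use_context=True)` against `evaluate(examples, use_context=False)` on the same examples would surface the performance gap the abstract reports; injecting a stance-biased `context` would probe the alignment effect.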
Paper Type: Short
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: stance detection, argument schemes and reasoning
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 6035