Detecting and Understanding the Use of Recurring Arguments in Collections of Opinionated Texts with LLMs
Abstract: Automated large-scale analysis of online argumentation around contested issues like abortion requires detecting and understanding the use of recurring arguments. Despite substantial work in computational argumentation analysis, a significant gap remains in research exploring how LLMs process argumentation on contentious issues. Given the increasing use of LLMs in potentially sensitive scenarios, including opinion analysis, a thorough and nuanced evaluation is timely and important. We address this gap using a dataset of over 2,000 opinion comments on polarizing topics and topic-specific argument lists, defining three tasks: detecting arguments in comments, extracting argument spans, and identifying whether an argument is supported or attacked. We compare four state-of-the-art LLMs and a fine-tuned RoBERTa baseline. While LLMs excel at binary support/attack decisions, they struggle to reliably detect arguments, and performance does not consistently improve with in-context learning. We conclude by discussing the implications of our findings for using LLMs for argument-based opinion mining.
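To make the task setup concrete, the sketch below shows one plausible way to frame the first task (argument detection) as a zero-shot query to an LLM. It is an illustration only, not the authors' actual prompts, models, or evaluation code; the `query_llm` callable and the prompt wording are assumptions introduced here for clarity.

```python
# Illustrative sketch of Task 1 (argument detection), assuming a generic
# chat-completion client passed in as `query_llm`. Not the paper's method.

from typing import Callable, List


def detect_arguments(
    comment: str,
    candidate_arguments: List[str],
    query_llm: Callable[[str], str],
) -> List[bool]:
    """For each topic-specific recurring argument, ask the LLM whether
    the opinion comment makes use of it."""
    results = []
    for argument in candidate_arguments:
        prompt = (
            "You will see an opinion comment and a recurring argument "
            "on the same topic.\n"
            f"Comment: {comment}\n"
            f"Argument: {argument}\n"
            "Does the comment make use of this argument (either supporting "
            "or attacking it)? Answer with 'yes' or 'no'."
        )
        answer = query_llm(prompt).strip().lower()
        results.append(answer.startswith("yes"))
    return results
```

The other two tasks (span extraction and support/attack classification) could be framed analogously, by asking for the relevant text span or for a binary stance label instead of a yes/no detection answer.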
Paper Type: Long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: Argument Mining, Argument-Based Opinion Mining, LLMs, Argumentation
Languages Studied: Python
Submission Number: 2878