The Power of LLM-Generated Synthetic Data for Stance Detection in Online Political Discussions

Published: 09 Oct 2024 · Last Modified: 04 Dec 2024 · SoLaR Poster · CC BY 4.0
Track: Technical
Keywords: large language models, stance detection, data augmentation, active learning, online political discussions
TL;DR: We show how to leverage LLM-generated synthetic data for stance detection in online political discussions, a challenging task due to the broad range of debate questions.
Abstract: Stance detection holds great potential to improve online political discussions, for example when deployed on discussion platforms for content moderation, topic summarisation, or to facilitate more balanced debates. Transformer-based models are typically employed directly for stance detection, but they require vast amounts of training data, and the wide variety of debate topics in online political discussions makes data collection particularly challenging. LLMs have revived interest in stance detection, yet deploying them directly in online political discussions raises challenges such as inconsistent outputs, biases, and vulnerability to adversarial attacks. We show how LLM-generated synthetic data can improve stance detection for online political discussions: reliable traditional stance detection models are used for online deployment, while the text generation capabilities of LLMs are leveraged for synthetic data generation in a secure offline environment. To achieve this, (i) we generate synthetic data for specific debate questions by prompting a Mistral-7B model and show that fine-tuning with this synthetic data substantially improves stance detection performance while remaining interpretable and aligned with real-world data; and (ii) using the synthetic data as a reference, we improve performance even further by identifying the most informative samples in an unlabelled dataset, i.e., those the stance detection model is most uncertain about and can benefit from the most. By fine-tuning with both the synthetic data and the most informative samples, we surpass the performance of a baseline model trained on true labels, while labelling considerably less data.
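
The abstract outlines two steps: prompting an LLM for synthetic, stance-labelled comments, and selecting the unlabelled samples the stance classifier is most uncertain about. Below is a minimal sketch of both steps using Hugging Face transformers; the instruct checkpoint, the prompt wording, the placeholder classifier name "stance-model", and the use of predictive entropy as the uncertainty score are illustrative assumptions, not the paper's exact implementation.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# --- Step (i): generate synthetic stance-labelled comments for a debate question ---
# Assumption: an instruction-tuned Mistral-7B checkpoint; the paper's exact
# prompt template is not given in the abstract.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

def generate_synthetic_comments(question: str, stance: str, n: int = 5) -> list[str]:
    """Prompt the LLM for n comments taking the given stance on a debate question."""
    prompt = (
        f"[INST] Write a short comment for an online political discussion "
        f"that argues '{stance}' on the question: \"{question}\" [/INST]"
    )
    outputs = generator(
        [prompt] * n,
        max_new_tokens=80,
        do_sample=True,
        temperature=0.9,
        return_full_text=False,
    )
    return [out[0]["generated_text"].strip() for out in outputs]

# --- Step (ii): pick the unlabelled samples the stance model is most uncertain about ---
# Assumption: "stance-model" is a placeholder for a fine-tuned stance classifier,
# and predictive entropy stands in for whatever uncertainty measure the paper uses.
clf_name = "stance-model"
tokenizer = AutoTokenizer.from_pretrained(clf_name)
classifier = AutoModelForSequenceClassification.from_pretrained(clf_name)

@torch.no_grad()
def most_uncertain(samples: list[str], k: int = 32) -> list[str]:
    """Return the k samples with the highest predictive entropy."""
    inputs = tokenizer(samples, padding=True, truncation=True, return_tensors="pt")
    probs = torch.softmax(classifier(**inputs).logits, dim=-1)
    entropy = torch.distributions.Categorical(probs=probs).entropy()
    top = entropy.topk(min(k, len(samples))).indices.tolist()
    return [samples[i] for i in top]
```

Per the abstract, the selected samples would then be labelled and combined with the synthetic data for fine-tuning, which is how the approach surpasses the fully supervised baseline while labelling considerably less data.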
Submission Number: 18