Exploiting contextual information to improve stance detection in informal political discourse with LLMs
Keywords: political stance detection, large language models, contextual prompting, user profiling, informal discourse, personalization in NLP, stance classification, online political forums, zero-shot learning
TL;DR: Adding user profile context to LLM prompts significantly improves political stance detection in informal online discourse.
Abstract: This study investigates the use of Large Language Models (LLMs) for political stance detection in informal online discourse, where language is often sarcastic, ambiguous, and context-dependent. We explore whether providing contextual information, specifically user profile summaries derived from historical posts, can improve classification accuracy. Using a real-world political forum dataset, we generate structured profiles that summarize users' ideological leaning, recurring topics, and linguistic patterns. We evaluate seven state-of-the-art LLMs in both baseline and context-enriched setups through a comprehensive cross-model evaluation. Our findings show that contextual prompts significantly boost accuracy, with improvements ranging from +17.5% to +38.5%, reaching up to 74% accuracy and surpassing previous approaches. We also analyze how profile size and post selection strategies affect performance, showing that strategically chosen political content yields better results than larger, randomly selected contexts. These findings underscore the value of incorporating user-level context to enhance LLM performance in nuanced political classification tasks.
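The two setups described in the abstract can be illustrated with a minimal sketch. All function names, prompt wording, and data fields below are illustrative assumptions, not the authors' exact implementation; the sketch only shows the general idea of prepending a profile summary (built from strategically selected political posts) to a baseline classification prompt.

```python
# Hypothetical sketch of baseline vs. context-enriched prompting.
# Names, labels, and prompt text are assumptions for illustration.

def build_profile(posts, max_posts=5):
    """Summarize a user's historical posts into a structured profile string.

    Strategic selection: prefer explicitly political posts over a larger,
    randomly chosen history, mirroring the selection strategy in the abstract.
    """
    political = [p for p in posts if p.get("political")][:max_posts]
    lines = [f'- "{p["text"]}"' for p in political]
    return "User profile (derived from past posts):\n" + "\n".join(lines)

def make_prompt(post_text, profile=None):
    """Baseline prompt, or context-enriched prompt when a profile is given."""
    task = ("Classify the political stance of the following post as "
            "LEFT, RIGHT, or NEUTRAL.\n")
    context = (profile + "\n\n") if profile else ""
    return f"{context}{task}Post: {post_text}\nStance:"

# Example usage: the enriched prompt carries only political history.
history = [
    {"text": "We need stronger climate regulation now.", "political": True},
    {"text": "Great game last night!", "political": False},
]
baseline_prompt = make_prompt("Typical big-government nonsense again...")
enriched_prompt = make_prompt("Typical big-government nonsense again...",
                              profile=build_profile(history))
```

The enriched prompt would then be sent to each evaluated LLM in place of the baseline prompt; the filtering in `build_profile` reflects the abstract's finding that curated political content outperforms larger random contexts.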
Archival Status: Archival
Paper Length: Long Paper (up to 8 pages of content)
Submission Number: 325