TAT: Improving Stance Detection on Social Media through Thought Alignment with LLMs

Published: 2025 · Last Modified: 06 Nov 2025 · WWW (Companion Volume) 2025 · CC BY-SA 4.0
Abstract: Stance detection, which identifies a given statement's stance toward a specific target, plays a crucial role in various fields. With the development of large language models (LLMs), researchers have sought to integrate them into stance detection systems through two main approaches: finetuning-based approaches, which leverage additional data generated by LLMs or directly finetune LLMs on existing datasets, and prompt engineering-based approaches, which use task-specific prompts to guide LLMs without additional training. However, these methods face significant challenges, including the limited accuracy and complexity of synthesized data, reliance on resource-intensive models, and inefficiency during inference. To address these limitations, this paper proposes a novel framework that integrates thought-chain data augmentation, which systematically enriches training data by generating logically consistent reasoning chains, and thought-aligned finetuning, which internalizes reasoning capabilities into the model by harmonizing the reasoning-intensive and direct-prediction paradigms. Experimental results demonstrate that the proposed approach achieves state-of-the-art performance in both in-target and cross-target settings, validating its effectiveness.
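The abstract describes harmonizing two supervision paradigms during finetuning: a reasoning-intensive view that supervises an explicit thought chain, and a direct-prediction view that maps input straight to a stance label. A minimal sketch of how such mixed training examples could be constructed is shown below; all function names, prompt templates, and the mixing ratio are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of building mixed training views for thought-aligned
# finetuning. The prompt wording and helper names are assumptions for
# illustration; the paper's actual templates and ratios may differ.

def build_direct_example(statement, target, label):
    """Direct-prediction view: supervise the stance label alone."""
    prompt = f"Statement: {statement}\nTarget: {target}\nStance:"
    return {"prompt": prompt, "completion": f" {label}"}

def build_cot_example(statement, target, reasoning, label):
    """Reasoning-intensive view: supervise an explicit reasoning chain."""
    prompt = f"Statement: {statement}\nTarget: {target}\nReason step by step:"
    return {"prompt": prompt,
            "completion": f" {reasoning} Therefore, the stance is {label}."}

def mix_views(records, cot_ratio=0.5):
    """Interleave both views so one model is finetuned on both paradigms."""
    mixed = []
    for i, rec in enumerate(records):
        if i / max(len(records), 1) < cot_ratio:
            mixed.append(build_cot_example(
                rec["statement"], rec["target"], rec["reasoning"], rec["label"]))
        else:
            mixed.append(build_direct_example(
                rec["statement"], rec["target"], rec["label"]))
    return mixed
```

Finetuning on such an interleaved set is one plausible way to align the two paradigms, since the same model weights must account for both the chain and the direct label.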