Abstract: Stance detection aims to determine the attitude or viewpoint expressed in a document regarding a specific target. Recent advancements in Large Language Models (LLMs), such as Chain-of-Thought (CoT) prompting, have improved the reasoning capabilities of these models by integrating intermediate rationales. However, the efficacy of CoT can be limited by the model's internal knowledge, resulting in inaccurate rationales that compromise the subsequent stance prediction. This limitation can further lead to hallucinations, where LLMs produce unfaithful responses and erroneous reasoning, undermining the reliability and precision of the output. Moreover, CoT can be challenging to implement on smaller language models with constrained knowledge and reasoning depth, which raises efficiency concerns. In response to these issues, we propose Ladder-of-Thought (LoT), a novel framework that uses knowledge as steps to elevate stance detection. LoT implements a triple-phase Progressive Optimization Framework: 1) External Knowledge Injection, which enriches the model's intrinsic knowledge base; 2) Intermediate Knowledge Generation, which allows the model to generate more accurate and dependable intermediate knowledge to enhance the downstream prediction; and 3) Downstream Fine-tuning & Prediction, which improves the model's prediction accuracy. This sequential approach symbolizes ascending a ladder, with each phase representing a progressive step towards optimal reasoning and prediction performance. Our empirical results demonstrate that LoT achieves state-of-the-art results in zero-shot/few-shot and in-target stance detection, marking a 16% improvement over ChatGPT and a 10% improvement over ChatGPT with CoT on the stance detection task.
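To make the three-phase structure concrete, the following is a minimal, heavily simplified sketch of the LoT pipeline. All function names, the dictionary-based "model", and the keyword-matching prediction are hypothetical placeholders for illustration only; the paper's actual implementation fine-tunes a language model at each phase.

```python
# Hypothetical sketch of LoT's triple-phase Progressive Optimization Framework.
# The "model" here is a plain dict and the stance rule is keyword matching --
# stand-ins for the fine-tuned language model described in the abstract.

def inject_external_knowledge(model, knowledge_corpus):
    """Phase 1: enrich the model's intrinsic knowledge base."""
    model.setdefault("knowledge", []).extend(knowledge_corpus)
    return model

def generate_intermediate_knowledge(model, target):
    """Phase 2: generate intermediate knowledge relevant to the target,
    to be consumed by the downstream prediction step."""
    relevant = [k for k in model.get("knowledge", [])
                if target.lower() in k.lower()]
    return f"Context for '{target}': " + "; ".join(relevant)

def predict_stance(document, rationale):
    """Phase 3: downstream prediction conditioned on the document
    plus the generated intermediate knowledge."""
    text = (document + " " + rationale).lower()
    if any(w in text for w in ("support", "favor", "good")):
        return "FAVOR"
    if any(w in text for w in ("oppose", "against", "bad")):
        return "AGAINST"
    return "NONE"

# Ascending the ladder: each phase builds on the previous one.
model = inject_external_knowledge({}, ["Climate change is a major policy debate."])
rationale = generate_intermediate_knowledge(model, "climate change")
stance = predict_stance("I support urgent action on this issue.", rationale)
```

The key design point the sketch reflects is that phase 2's output is not a final answer but intermediate knowledge that phase 3 consumes, so errors in the rationale propagate directly into the stance prediction, which is why LoT optimizes the rationale-generation step first.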