Confidence-Guided Cross-Premise Contrastive Decoding for Enhanced LLMs Reasoning

ACL ARR 2025 February Submission 3848 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Large language models (LLMs) are prone to distraction by contextual information during reasoning. Previous work primarily focuses on improving next-token generation while overlooking the bias that existing premises can introduce. In this paper, we propose a novel decoding method to mitigate this issue. We establish a framework that uses the predicted logits to assess the model's confidence. By decomposing the full context into multiple premises, we obtain a clearer picture of how relevant each premise is to the question. When predicting the next token, we adjust the original model output by contrasting the most confident logits with the least confident ones. Our method reveals how the model dynamically activates and adjusts its consideration of each premise as reasoning progresses.
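
The abstract describes the decoding rule only at a high level. Below is a minimal sketch of one plausible reading of a single decoding step, assuming a Hugging Face causal LM, confidence measured as the maximum softmax probability of each premise-conditioned distribution, and an additive contrast scaled by a factor `alpha`. The model name, the confidence measure, and the combination rule are illustrative assumptions, not the authors' implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def next_token_logits(text: str) -> torch.Tensor:
    """Return the next-token logits for the given text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids).logits[0, -1]

def premise_contrastive_step(premises, question, generated, alpha=1.0):
    """One decoding step: contrast the most- and least-confident premise logits."""
    # Logits conditioned on the full context (all premises + question + prefix).
    full_context = " ".join(premises) + " " + question + generated
    base = next_token_logits(full_context)

    # Per-premise logits and a simple confidence score
    # (maximum softmax probability of each distribution).
    per_premise = [next_token_logits(p + " " + question + generated) for p in premises]
    confidences = [torch.softmax(l, dim=-1).max().item() for l in per_premise]

    most = per_premise[confidences.index(max(confidences))]
    least = per_premise[confidences.index(min(confidences))]

    # Adjust the original output by the contrast between the most and least
    # confident premise-conditioned predictions (assumed additive form).
    adjusted = base + alpha * (most - least)
    return int(torch.argmax(adjusted))
```

Greedy generation would repeat this step, appending the chosen token to `generated` each time; the paper may instead sample or renormalize the adjusted logits.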
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: large language models, reasoning, contrastive decoding
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 3848