DIVER: Large Language Model Decoding with Span-Level Mutual Information Verification

ICLR 2026 Conference Submission 24186 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large language model, decoding
Abstract: Large language models (LLMs) have shown impressive capabilities in adapting to various tasks when provided with task-specific instructions. However, under standard decoding strategies, LLM outputs often deviate from the inputs. Intuitively, a compliant LLM output should reflect the information present in the input, which can be measured by pointwise mutual information (PMI) scores. We therefore propose DIVER, a novel approach that enhances LLM Decoding through span-level PMI VERification. During inference, DIVER first identifies divergence steps that may lead to multiple candidate spans. It then calculates a PMI score for each candidate span by assessing the log-likelihood gain of the input if that span were generated. Finally, the optimal span is selected based on the PMI re-ranked output distributions. We evaluate our method across various downstream tasks, and empirical results demonstrate that DIVER significantly outperforms existing decoding methods in both performance and versatility.
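To make the PMI re-ranking step concrete, below is a minimal sketch under stated assumptions, not the authors' implementation: it assumes a HuggingFace causal LM, and the function names (sequence_logprob, pmi_rerank), the balancing weight alpha, and the exact conditioning scheme are illustrative. Following the abstract, PMI(input; span) is estimated as log p(input | prefix, span) - log p(input | prefix), i.e., the log-likelihood gain of the input if the candidate span were generated.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def sequence_logprob(context: str, target: str) -> float:
    """Sum of log p(target tokens | context, preceding target tokens)."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=-1)
    logits = model(input_ids).logits
    # Position t predicts token t+1, so drop the last position and
    # read off the log-probs assigned to the target tokens.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    tgt_len = tgt_ids.shape[-1]
    picked = log_probs[0, -tgt_len:].gather(-1, tgt_ids[0].unsqueeze(-1))
    return picked.sum().item()

def pmi_rerank(input_text, prefix, candidate_spans, alpha=1.0):
    """Pick the candidate span maximizing LM score + alpha * PMI gain,
    where the PMI gain approximates
        log p(input | prefix, span) - log p(input | prefix)."""
    base = sequence_logprob(prefix, input_text)  # log p(input | prefix)
    best_span, best_score = None, float("-inf")
    for span in candidate_spans:
        lm_score = sequence_logprob(prefix, span)
        pmi_gain = sequence_logprob(prefix + span, input_text) - base
        score = lm_score + alpha * pmi_gain
        if score > best_score:
            best_span, best_score = span, score
    return best_span
```

This sketch covers only the re-ranking stage; in DIVER proper, the candidate spans come from the divergence steps identified during inference, a step not modeled here.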
Primary Area: foundation or frontier models, including LLMs
Submission Number: 24186