Uncertainty Quantification via Reasoning–Explanation Symmetry in LLMs

12 Sept 2025 (modified: 14 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Uncertainty Quantification, Reasoning–Explanation Symmetry, Large Language Models, Natural Language Inference
TL;DR: This paper leverages reasoning–explanation symmetry in large language models to achieve more reliable uncertainty quantification with few samples.
Abstract: Uncertainty quantification (UQ) for large language model (LLM) outputs has attracted increasing attention, as it is crucial for hallucination detection and selective generation; however, existing semantic methods based on cross-output consistency require multiple samples and thus incur additional cost. We hypothesize that, for reliable answers, LLMs exhibit consistent forward reasoning and backward explanation paths. Building on this, we propose Reasoning–Explanation Symmetry (RES) to quantify uncertainty from the answer itself without repeated sampling: for each question, we first generate structured reasoning and an answer, then condition on the answer to generate a structured explanation; bidirectional natural language inference (NLI) assesses the semantic entailment between the two to construct a symmetry score. RES yields more accurate estimates with few samples and offers stronger interpretability. We evaluate RES on six datasets for both uncertainty quantification and best-answer selection, and the results demonstrate significant advantages on complex reasoning tasks.
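A minimal sketch of the RES pipeline as described in the abstract (forward reasoning, answer-conditioned explanation, bidirectional NLI, symmetry score). This is not the authors' implementation: `llm_generate` is a hypothetical placeholder for an LLM call, `roberta-large-mnli` is an illustrative NLI model choice, and averaging the two entailment directions is an assumed aggregation; the paper's exact prompts, models, and scoring formula may differ.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative NLI model; the paper may use a different entailment scorer.
_NLI_NAME = "roberta-large-mnli"
_nli_tok = AutoTokenizer.from_pretrained(_NLI_NAME)
_nli_model = AutoModelForSequenceClassification.from_pretrained(_NLI_NAME)


def llm_generate(prompt: str) -> str:
    """Hypothetical placeholder for a call to the underlying LLM."""
    raise NotImplementedError("plug in your LLM API or local model here")


def nli_entailment_prob(premise: str, hypothesis: str) -> float:
    """P(entailment) from an off-the-shelf MNLI classifier."""
    inputs = _nli_tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = _nli_model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # roberta-large-mnli label order: contradiction, neutral, entailment
    return probs[2].item()


def res_symmetry_score(question: str) -> float:
    # 1. Forward pass: structured reasoning plus a final answer.
    reasoning_and_answer = llm_generate(
        f"Question: {question}\nReason step by step, then state the final answer."
    )
    # 2. Backward pass: condition on the answer and generate a structured explanation.
    explanation = llm_generate(
        f"Question: {question}\nAnswer: {reasoning_and_answer}\n"
        "Explain step by step why this answer follows from the question."
    )
    # 3. Bidirectional NLI between forward reasoning and backward explanation.
    fwd = nli_entailment_prob(reasoning_and_answer, explanation)
    bwd = nli_entailment_prob(explanation, reasoning_and_answer)
    # 4. Symmetry score: mean of the two entailment directions (assumed aggregation).
    return 0.5 * (fwd + bwd)
```

A higher symmetry score would indicate that the reasoning and the answer-conditioned explanation entail each other, i.e., lower estimated uncertainty; a low score flags a candidate hallucination or an answer to abstain from in selective generation.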
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 4506