IUQ: Interrogative Uncertainty Quantification for Long-Form Large Language Model Generation

ACL ARR 2026 January Submission 6304 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: uncertainty quantification, natural language explanations, knowledge inducing
Abstract: Despite the rapid advancement of Large Language Models (LLMs), uncertainty quantification in LLM generation remains a persistent challenge. While recent methods have achieved remarkable accuracy by limiting LLMs to short or constrained answer sets, the most common use of LLMs is long and free-form generation, where the underlying semantics are multifaceted and the linguistic structure is complex. One major complication arising from this use case is the tendency of LLMs to produce semantically coherent yet factually incorrect responses. To tackle this challenge, this paper introduces Interrogative Uncertainty Quantification (IUQ), a novel framework that leverages inter-sample consistency and intra-sample faithfulness to quantify the uncertainty of long-form LLM outputs. By employing an interrogate-respond paradigm, our method provides reliable measures of claim-level uncertainty and of the model's faithfulness. Experimental results across diverse model families and model sizes demonstrate that IUQ outperforms baselines by at least 1.7% on average over long-form generation datasets.
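To make the two signals in the abstract concrete, here is a minimal, purely illustrative sketch of how inter-sample consistency and intra-sample faithfulness might be combined into a claim-level uncertainty score. All function names, the majority-vote consistency proxy, the token-overlap faithfulness proxy, and the weighting parameter `alpha` are assumptions for illustration, not the paper's actual IUQ method (which interrogates the model with generated questions and scores the sampled answers).

```python
from collections import Counter

def consistency_score(sampled_answers):
    """Inter-sample consistency: fraction of sampled answers that agree
    with the majority answer (a toy stand-in for semantic clustering)."""
    counts = Counter(sampled_answers)
    majority = counts.most_common(1)[0][1]
    return majority / len(sampled_answers)

def faithfulness_score(claim_tokens, answer_tokens):
    """Intra-sample faithfulness: token overlap between the original claim
    and one answer to a question about it (a crude proxy for entailment)."""
    claim, answer = set(claim_tokens), set(answer_tokens)
    return len(claim & answer) / len(claim) if claim else 0.0

def claim_uncertainty(sampled_answers, claim_tokens, alpha=0.5):
    """Combine both signals into an uncertainty score in [0, 1];
    higher means the claim is less reliable. `alpha` is a hypothetical
    mixing weight, not a parameter from the paper."""
    cons = consistency_score(sampled_answers)
    faith = max(faithfulness_score(claim_tokens, a.split())
                for a in sampled_answers)
    return 1.0 - (alpha * cons + (1.0 - alpha) * faith)
```

For example, if all sampled answers to a question about a claim agree and overlap fully with the claim, the uncertainty collapses to zero; disagreement or low overlap pushes it toward one.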
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Dialogue and Interactive Systems, Generation, Interpretability and Analysis of Models for NLP, Language Modeling, Question Answering
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 6304