Prompting the Unknown: Understanding Response Uncertainty in Large Language Models

ACL ARR 2026 January Submission 10870 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: uncertainty quantification; large language models; explainability
Abstract: Large language models (LLMs) are widely used in decision-making across diverse domains. Ensuring the generation of safe and reliable responses is critical for the effective deployment of LLM-based applications, particularly in high-stakes domains such as healthcare and finance. These applications typically rely on carefully crafted prompts to guide response generation; however, the relationship between prompts and the reliability of LLM-generated responses is not yet fully understood. To address this gap, we propose a novel prompt-response concept model that explains the relationship between the amount of task-relevant information (informativeness) in the prompt and the uncertainty of the LLM-generated response by decomposing response uncertainty into four distinct sources: prompt underspecification, model quality, task variability, and semantic redundancy. We prove that response uncertainty decreases as prompt informativeness or model quality increases, mirroring the behavior of epistemic uncertainty in probabilistic models. Our experimental results on real-world datasets further validate our proposed model and corroborate the theoretical results.
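To make the abstract's central claim concrete, the minimal sketch below illustrates one common way response uncertainty is estimated in practice: sample several responses to the same prompt and compute the entropy of the empirical answer distribution, for prompts of increasing informativeness. This is an illustrative assumption, not the authors' method; in particular, `sample_response` is a toy stand-in that hard-codes the narrowing of plausible answers, and the paper's four-way decomposition of uncertainty is not reproduced here.

```python
import math
import random
from collections import Counter


def sample_response(prompt: str) -> str:
    """Toy stand-in for one stochastic LLM call (temperature > 0).
    More task-relevant detail in the prompt -> fewer plausible answers.
    Replace with a real model/API call in practice (hypothetical helper)."""
    answer_pool = ["low", "medium", "high"]
    if "prior stroke" in prompt:
        answer_pool = ["medium", "high"]
    if "anticoagulant" in prompt:
        answer_pool = ["high"]
    return random.choice(answer_pool)


def response_entropy(prompt: str, n_samples: int = 200) -> float:
    """Estimate response uncertainty as the Shannon entropy (in nats) of the
    empirical distribution over answers sampled for the same prompt."""
    counts = Counter(sample_response(prompt) for _ in range(n_samples))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())


# Prompts ordered from less to more task-relevant information (informativeness).
prompts = [
    "What is the patient's risk level?",
    "What is the patient's risk level, given age 67 and a prior stroke?",
    "What is the patient's risk level, given age 67, a prior stroke, and "
    "current anticoagulant therapy?",
]

if __name__ == "__main__":
    for p in prompts:
        print(f"{response_entropy(p):.3f} nats | {p}")
```

Running the sketch prints a decreasing entropy down the list, mirroring (by construction of the toy sampler) the paper's claim that response uncertainty falls as prompt informativeness rises.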
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: uncertainty quantification
Contribution Types: Model analysis & interpretability, Theory
Languages Studied: English
Submission Number: 10870