Understanding the Relationship between Prompts and Response Uncertainty in Large Language Models

ICLR 2025 Workshop BuildingTrust Submission 64 Authors

10 Feb 2025 (modified: 06 Mar 2025) · Submitted to BuildingTrust · CC BY 4.0
Track: Long Paper Track (up to 9 pages)
Keywords: Large language models, Uncertainty Quantification, Explainability, Model Response Uncertainty Quantification, Prompt Informativeness
TL;DR: This paper explains the relationship between the prompt informativeness and response uncertainty in large language models.
Abstract: Large language models (LLMs) are widely used in decision-making, but their reliability, especially in critical tasks like healthcare, is not well established. Understanding how LLMs reason and make decisions is therefore crucial for their safe deployment. This paper investigates how the uncertainty of responses generated by LLMs relates to the information provided in the input prompt. Leveraging the insight that LLMs learn to infer latent concepts during pretraining, we propose a prompt-response concept model that explains how LLMs generate responses and clarifies the relationship between prompts and response uncertainty. We show that response uncertainty decreases as the prompt's informativeness increases, mirroring the behavior of epistemic uncertainty. Detailed experiments on real-world datasets validate the proposed model.
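To make the abstract's central claim concrete, below is a minimal, self-contained sketch of how one might measure it empirically: estimate response uncertainty as the entropy of repeatedly sampled answers, and compare prompts of increasing informativeness. This is not the authors' prompt-response concept model; `query_llm` is a hypothetical stub standing in for a real chat-completion call, and the informativeness proxy is a toy assumption.

```python
# Minimal sketch (assumption, not the paper's method): estimate
# H(response | prompt) via Monte Carlo sampling, then compare prompts
# that carry progressively more information.
from collections import Counter
import math
import random

def query_llm(prompt: str) -> str:
    # Hypothetical stub: a real implementation would sample an LLM at
    # temperature > 0. Here, prompts with more facts (semicolon-separated,
    # a toy proxy for informativeness) yield more peaked answer
    # distributions, mimicking the paper's qualitative claim.
    informativeness = prompt.count(";") + 1
    answers = ["A", "B", "C", "D"]
    weights = [informativeness ** 2] + [1] * (len(answers) - 1)
    return random.choices(answers, weights=weights)[0]

def response_entropy(prompt: str, n_samples: int = 200) -> float:
    # Monte Carlo estimate of the answer-distribution entropy in bits.
    counts = Counter(query_llm(prompt) for _ in range(n_samples))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Prompts ordered by informativeness: estimated entropy should decrease.
prompts = [
    "Diagnose the patient.",
    "Diagnose the patient; fever for 3 days.",
    "Diagnose the patient; fever for 3 days; dry cough; recent travel.",
]
for p in prompts:
    print(f"{response_entropy(p):.3f} bits  <-  {p!r}")
```

Under these assumptions, the printed entropies shrink as facts are added to the prompt, which is the epistemic-uncertainty-like behavior the abstract describes.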
Submission Number: 64