Do LLMs Act Like Rational Agents? Measuring Belief Coherence in Probabilistic Decision Making

Published: 02 Mar 2026 · Last Modified: 10 Mar 2026 · ICLR 2026 Workshop AIMS · CC BY 4.0
Keywords: Decision-Making, Rationality, Bayesian, LLM, Agent
TL;DR: We ask whether it is useful to treat an LLM as a rational utility maximizer with coherent beliefs and stable preferences, and develop a diagnostic framework that tests necessary implications of Bayesian utility maximization.
Abstract: Large language models (LLMs) are increasingly deployed as agents in high-stakes domains where optimal actions depend both on uncertainty about the world and on the utilities of different outcomes, yet their decision logic remains difficult to interpret. We study whether LLMs behave as rational utility maximizers with coherent beliefs and stable preferences, examining model behavior on diagnostic challenge problems. By comparing elicited probabilities with observed actions, the results illuminate the relationship between LLM inferences and ideal Bayesian utility maximization. Our approach provides falsifiable conditions under which the reported probabilities cannot correspond to the true beliefs of any rational agent. We apply this methodology to multiple medical diagnostic domains with evaluations across frontier LLMs. We discuss the implications of the results and directions forward for the use of LLMs in guiding high-stakes decisions.
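To make the style of test concrete, below is a minimal sketch, not the paper's implementation, of one necessary implication of Bayesian utility maximization: an agent whose elicited probabilities are its true beliefs must choose an action that maximizes expected utility under those probabilities, so a mismatch between elicited probabilities and the observed action falsifies the rational-agent account. The function name and the toy numbers are hypothetical illustrations.

```python
import numpy as np

def coherence_check(probs, utilities, chosen_action, tol=1e-9):
    """Test one necessary condition of Bayesian utility maximization.

    probs: elicited probabilities over world states (e.g., diagnoses), shape (S,).
    utilities: utility matrix, shape (A, S); utilities[a, s] is the payoff
               of taking action a when state s is true.
    chosen_action: index of the action the model actually took.

    Returns (passes, expected_utilities).
    """
    probs = np.asarray(probs, dtype=float)
    utilities = np.asarray(utilities, dtype=float)

    # Probabilistic coherence: elicited numbers must form a distribution.
    if np.any(probs < -tol) or abs(probs.sum() - 1.0) > 1e-6:
        return False, None

    # Expected utility of each action under the elicited beliefs.
    eu = utilities @ probs

    # A rational agent holding these beliefs must pick an EU-maximizing action.
    passes = eu[chosen_action] >= eu.max() - tol
    return passes, eu

# Toy example (hypothetical numbers): two diagnoses, two treatments.
probs = [0.8, 0.2]                # P(disease A), P(disease B)
utilities = [[1.0, -0.5],         # treat-for-A payoff under A, under B
             [-0.5, 1.0]]         # treat-for-B payoff under A, under B
ok, eu = coherence_check(probs, utilities, chosen_action=1)
print(ok, eu)  # False: choosing treatment B is inconsistent with P(A)=0.8
```

This check covers a single decision; the framework the abstract describes tests such implications jointly across elicited probabilities and observed actions, so a failure indicates that no single coherent belief state rationalizes the model's behavior.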
Track: Long Paper
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 96