On the Importance of Uncertainty in Decision-Making with Large Language Models

TMLR Paper 2371 Authors

12 Mar 2024 (modified: 09 Jul 2024) · Decision pending for TMLR · CC BY-SA 4.0
Abstract: We investigate the role of uncertainty in decision-making problems with natural language as input. For such tasks, using Large Language Models (LLMs) as agents has become the norm. However, recent approaches employ no additional phase for estimating the uncertainty the agent has about the world during the decision-making task. We focus on a fundamental decision-making framework with natural language as input, namely contextual bandits, where the context information consists of text. As a representative of approaches without uncertainty estimation, we consider an LLM agent with a greedy policy, which picks the action with the largest predicted reward. We compare this baseline to LLM agents that make active use of uncertainty estimation by integrating the uncertainty into a Thompson Sampling policy. We employ different techniques for uncertainty estimation, such as Laplace Approximation, Dropout, and Epinets. We show empirically on real-world data that the greedy policy performs worse than the Thompson Sampling policies. These findings suggest that, while overlooked in the LLM literature, uncertainty estimation improves performance on bandit tasks with LLM agents.
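The greedy-versus-Thompson-Sampling comparison described in the abstract can be illustrated on a toy contextual bandit. The sketch below is a minimal, hypothetical setup, not the paper's actual method: contexts are random vectors standing in for LLM text embeddings, and each arm's reward model is a Bayesian linear regression rather than an LLM head with Laplace/Dropout/Epinet uncertainty. The greedy policy picks the arm with the largest posterior-mean reward; Thompson Sampling instead samples reward parameters from each arm's posterior before maximizing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes (all hypothetical): 3 arms, 5-dim contexts, 500 rounds.
N_ARMS, DIM, T = 3, 5, 500
true_theta = rng.normal(size=(N_ARMS, DIM))  # unknown per-arm reward parameters


def reward(arm, x):
    """Noisy linear reward for pulling `arm` under context `x`."""
    return true_theta[arm] @ x + rng.normal(scale=0.1)


def run(policy):
    """Run one bandit episode; return cumulative reward.

    Each arm keeps a Bayesian linear-regression posterior N(mu_a, A_a^{-1})
    with a standard-normal prior, updated after every pull.
    """
    A = np.stack([np.eye(DIM) for _ in range(N_ARMS)])  # precision matrices
    b = np.zeros((N_ARMS, DIM))
    total = 0.0
    for _ in range(T):
        x = rng.normal(size=DIM)  # stand-in for a text embedding
        mu = np.stack([np.linalg.solve(A[a], b[a]) for a in range(N_ARMS)])
        if policy == "greedy":
            # No uncertainty: act on the posterior-mean reward estimate.
            scores = mu @ x
        else:
            # Thompson Sampling: sample parameters from each posterior,
            # then act greedily with respect to the sample.
            theta = np.stack(
                [rng.multivariate_normal(mu[a], np.linalg.inv(A[a]))
                 for a in range(N_ARMS)]
            )
            scores = theta @ x
        arm = int(np.argmax(scores))
        r = reward(arm, x)
        A[arm] += np.outer(x, x)  # posterior precision update
        b[arm] += r * x
        total += r
    return total


print(run("greedy"), run("thompson"))
```

In this sketch the only difference between the two agents is whether posterior uncertainty enters action selection, which mirrors the paper's experimental contrast; swapping the linear posterior for Laplace, Dropout, or Epinet estimates over an LLM reward head changes the uncertainty source but not the Thompson Sampling logic.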
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready revision
Assigned Action Editor: ~Nino_Vieillard1
Submission Number: 2371