Keywords: LLM, proactive communication, decision theory
Abstract: Large language model (LLM) agents are increasingly used to assist people with complex tasks, but real-world user queries are often underspecified. When information is missing, agents face a dilemma: act autonomously and risk costly mistakes, or ask too many clarifying questions and frustrate the user. We propose a decision-theoretic framework for adaptive communication that dynamically determines when clarification is necessary based on three contextual factors: query ambiguity, task risk, and user cognitive load.
Our approach instantiates this framework with a Value of Information (VoI) method that, at inference time, explicitly weighs the expected utility of clarification against its communication cost. Unlike confidence-threshold or heuristic prompting approaches, our method requires no task-specific tuning and adapts flexibly across domains and stakes. In experiments on 20 Questions, medical diagnosis, flight recommendation, and WebShop, our adaptive strategies consistently achieve higher utility than baselines while asking fewer unnecessary questions and requiring no hand-tuned thresholds. These results establish a principled foundation for building LLM agents that are not only competent actors but also strategic communicators, able to adapt their behavior to user context and task stakes for more reliable real-world collaboration.
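To make the trade-off concrete, a minimal sketch of the standard VoI decision rule the abstract alludes to (the notation here is illustrative and not taken from the submission): with user query $q$, candidate agent action $x$, possible clarification answer $a$, utility $U$, and communication cost $c$,
\[
\mathrm{VoI}(q) \;=\; \mathbb{E}_{a \sim p(a \mid q)}\!\Big[\max_{x}\, \mathbb{E}\big[U(x) \mid q, a\big]\Big] \;-\; \max_{x}\, \mathbb{E}\big[U(x) \mid q\big], \qquad \text{ask iff } \mathrm{VoI}(q) > c.
\]
Under this rule the agent clarifies only when the expected utility gain from the user's answer exceeds the cost of asking.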
Primary Area: foundation or frontier models, including LLMs
Submission Number: 19448