Gathering Context that Supports Decisions via Entropy Search with Language Models

Published: 12 Jun 2025, Last Modified: 21 Jun 2025 · EXAIT@ICML 2025 Poster · CC BY 4.0
Track: Language Modeling
Keywords: context gathering, large language model agent, reasoning under uncertainty
TL;DR: We propose an entropy-based strategy for LLMs to ask targeted questions that reduce uncertainty and improve decision-making under partial context. Tested on 1D-ARC, GSM8K, and Fermi, our method outperforms strong baselines.
Abstract: Real-world decision-making systems require background information about their environment to take effective actions. However, this information is frequently incomplete or costly to acquire. Rather than presuming complete context, an effective decision maker must actively gather relevant information through a sequence of targeted follow-up questions before acting. This paper presents a framework for adaptive information gathering that uses large language models (LLMs) as interactive decision-making agents. Guided by an information-theoretic objective, the LLM selects questions that minimize the entropy of the predicted optimal-action distribution, effectively prioritizing information that reduces uncertainty. Our method enables instance-specific reasoning under uncertainty and improves decision quality through principled context acquisition. We evaluate our approach on modified versions of three standard benchmarks (1D-ARC, GSM8K, and Fermi), adapted to study partially observable contexts where relevant information must be actively gathered, and assess performance using state-of-the-art LLMs. Empirically, our proposed Entropy Search strategy consistently outperforms strong baselines, demonstrating the effectiveness of uncertainty-guided information gathering for LLM-based decision support. Our implementation is available at https://anonymous.4open.science/r/info-gathering-047B/
Serve As Reviewer: ~Sicong_Huang1, ~Violet_Xiang1
Submission Number: 13
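
For concreteness, the selection rule described in the abstract (choose the follow-up question that minimizes the entropy of the predicted optimal-action distribution) can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the paper's implementation; see the linked repository for that. It assumes the objective is an expected posterior entropy averaged over simulated answers, and `answer_model`, `action_model`, and the toy demo below are hypothetical stand-ins for LLM calls.

```python
# Minimal sketch of entropy-guided question selection (assumption: not the
# authors' code; see the linked repository for the actual implementation).
# `answer_model` and `action_model` are hypothetical stand-ins for LLM calls:
# one simulates plausible answers to a candidate question, the other predicts
# a distribution over optimal actions given the context gathered so far.
import math
from typing import Callable

Context = list[tuple[str, str]]  # (question, answer) pairs gathered so far
AnswerModel = Callable[[Context, str], list[tuple[str, float]]]
ActionModel = Callable[[Context], dict[str, float]]

def entropy(dist: dict[str, float]) -> float:
    """Shannon entropy (in nats) of a distribution over candidate actions."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def expected_posterior_entropy(question: str, context: Context,
                               answer_model: AnswerModel,
                               action_model: ActionModel) -> float:
    """Average action-distribution entropy over simulated answers to `question`."""
    return sum(
        p_ans * entropy(action_model(context + [(question, answer)]))
        for answer, p_ans in answer_model(context, question)
    )

def select_question(candidates: list[str], context: Context,
                    answer_model: AnswerModel,
                    action_model: ActionModel) -> str:
    """Pick the follow-up question expected to leave the least uncertainty."""
    return min(candidates,
               key=lambda q: expected_posterior_entropy(
                   q, context, answer_model, action_model))

# Toy usage (hypothetical models): one question is informative, one is not.
def toy_answers(ctx: Context, q: str) -> list[tuple[str, float]]:
    return [("yes", 0.5), ("no", 0.5)]

def toy_actions(ctx: Context) -> dict[str, float]:
    if any(q == "Is the budget over $1k?" for q, _ in ctx):
        return {"buy": 1.0}  # the informative answer collapses uncertainty
    return {"buy": 0.5, "wait": 0.5}

print(select_question(["Is the budget over $1k?", "What color is it?"],
                      [], toy_answers, toy_actions))
# -> "Is the budget over $1k?"
```

Averaging over simulated answers, rather than committing to a single predicted answer, is what makes this an expected-information-gain criterion: since the prior action entropy is the same for every candidate question, minimizing expected posterior entropy is equivalent to maximizing the mutual information between the answer and the optimal action.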