Fact-Augmented Lookahead for LLM Agents: Simple Online Memory, No Finetuning
TL;DR: An LLM agent learns verifiable “atomic facts” online to ground lookahead planning, yielding higher cumulative return without any finetuning.
Abstract: Large Language Models (LLMs) are increasingly capable but often require targeted guidance or extensive interaction history to plan effectively in complex, interactive environments. We introduce an LLM agent framework that enhances planning through in-context learning, built on \emph{atomic fact} augmentation and a recursive, depth-limited lookahead. The agent extracts task-critical facts from its trajectories, validates candidate facts with a lightweight predictive-consistency filter (and optionally compresses them), and uses the resulting fact set to condition action proposal, single-step latent world-model simulation, and state-value estimation. Planning proceeds by simulating and evaluating candidate trajectories with the accumulated facts and recent history, enabling online improvement without weight updates. We provide abstraction-style motivation, treating facts as reducing state aliasing (captured by a proxy $\epsilon_{\mathrm{sim}}$) and fact-conditioned simulation as lowering one-step prediction error (a proxy $\delta_{\mathrm{model}}$), without claiming formal guarantees. Empirically, on text FrozenLake variants, CrafterMini, and ALFWorld, the approach improves cumulative return over ReAct/Reflexion and search-only baselines.
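To make the planning loop concrete, here is a minimal Python sketch of the recursive, depth-limited lookahead and the predictive-consistency filter described in the abstract. All helper names (`propose_actions`, `simulate_step`, `estimate_value`, `consistent`) and their signatures are hypothetical placeholders for fact-conditioned LLM calls, not the paper's actual implementation.

```python
"""Minimal sketch: fact-augmented, depth-limited lookahead.
Every LLM-backed helper below is a hypothetical placeholder."""

def propose_actions(state, facts, k=3):
    """Hypothetical LLM call: propose k candidate actions,
    conditioned on the state and the current fact set."""
    raise NotImplementedError

def simulate_step(state, action, facts):
    """Hypothetical LLM call: one-step latent world-model rollout,
    returning (next_state, reward), conditioned on the facts."""
    raise NotImplementedError

def estimate_value(state, facts):
    """Hypothetical LLM call: scalar value estimate for a
    (possibly simulated) state, conditioned on the facts."""
    raise NotImplementedError

def lookahead(state, facts, depth=2):
    """Recursive, depth-limited search over simulated trajectories.
    Returns (best_value, best_action) under the current fact set."""
    if depth == 0:
        return estimate_value(state, facts), None
    best_v, best_a = float("-inf"), None
    for a in propose_actions(state, facts):
        nxt, r = simulate_step(state, a, facts)
        v, _ = lookahead(nxt, facts, depth - 1)
        if r + v > best_v:
            best_v, best_a = r + v, a
    return best_v, best_a

def consistent(fact, transitions, facts):
    """Hypothetical predictive-consistency filter: accept `fact` only
    if adding it does not reduce one-step prediction agreement on
    observed (state, action, next_state) transitions."""
    def agreement(fs):
        return sum(simulate_step(s, a, fs)[0] == s2
                   for s, a, s2 in transitions)
    return agreement(facts | {fact}) >= agreement(facts)
```

Under these assumptions, the agent would call `lookahead` at each environment step to pick an action, and gate each newly extracted fact through `consistent` before adding it to the fact set.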
Submission Number: 1408