A Practical Guide to Robust Retrieval-Augmented LLM Agents

Published: 03 Mar 2026, Last Modified: 09 Mar 2026 · ICLR 2026 Workshop MemAgents · CC BY 4.0
Keywords: Retrieval-Augmented Generation, Experience Retrieval, Episodic Memory, In-Context Learning, Agents
TL;DR: We study retrieval-augmented LLM agents and show that simple episodic experience retrieval (especially when used during training) significantly improves generalization to unseen tasks.
Abstract: Thanks to large-scale pretraining and parameter-efficient finetuning, quickly adapting off-the-shelf, moderately sized LLMs to a set of agentic tasks within a given environment is relatively straightforward. But when asked to perform new, unseen tasks, even within the same environment, trained models often fail to generalize. In this work, we investigate how retrieval can be integrated into an agentic training pipeline, such that trained models can retrieve and learn in-context from newly collected experience. We review multiple design choices across *environment formulation*, *training scheme*, and *inference process* that enable agents to benefit from retrieval and seamlessly adapt to unseen tasks, resulting in significantly improved performance on unseen tasks compared to baselines.
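The episodic experience retrieval described above can be illustrated with a minimal sketch: store (task, trajectory) episodes, retrieve the most similar past episodes for a new task, and prepend them to the agent's prompt as in-context demonstrations. This is an assumption-laden toy version, not the paper's implementation; a bag-of-words cosine similarity stands in for a learned retriever, and the class and function names (`EpisodicMemory`, `build_prompt`) are hypothetical.

```python
# Toy sketch of episodic experience retrieval for in-context learning.
# Assumptions: episodes are (task_description, trajectory) string pairs;
# bag-of-words cosine similarity replaces a learned embedding retriever.
from collections import Counter
import math


class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # list of (task_description, trajectory)

    def add(self, task, trajectory):
        """Record a newly collected experience."""
        self.episodes.append((task, trajectory))

    def _vec(self, text):
        # Bag-of-words term counts as a stand-in for an embedding.
        return Counter(text.lower().split())

    def _cosine(self, a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(self, query, k=2):
        """Return the k episodes whose task is most similar to the query."""
        q = self._vec(query)
        return sorted(
            self.episodes,
            key=lambda ep: self._cosine(q, self._vec(ep[0])),
            reverse=True,
        )[:k]


def build_prompt(memory, new_task, k=2):
    """Prepend retrieved episodes to the new task as in-context demos."""
    demos = memory.retrieve(new_task, k)
    context = "\n\n".join(f"Task: {t}\nTrajectory: {tr}" for t, tr in demos)
    return f"{context}\n\nTask: {new_task}\nTrajectory:"
```

At inference time, the agent calls `build_prompt` for each unseen task; during training, the same retrieval step can be applied so the model learns to condition on retrieved experience rather than seeing it only at test time.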
Submission Number: 111