Augmenting Language Agents with Parametric Memory

Published: 08 Nov 2025 · Last Modified: 08 Nov 2025 · ResponsibleFM @ NeurIPS 2025 · CC BY 4.0
Keywords: language agent, agent memory, model-based memory, reflection
Abstract: Large Language Models (LLMs) have demonstrated strong reasoning abilities, yet existing agent frameworks remain constrained by two limitations. First, they typically operate at the per-instance level, confining signals to individual problems and overlooking transferable patterns across tasks. Second, while some approaches attempt to incorporate global information through external memory, these memories are non-parametric and thus capture only shallow interactions across instances, failing to uncover deeper regularities. To overcome these limitations, we propose $\texttt{ParamAgent}$, a language agent framework that leverages a domain-adaptive parametric memory to internalize knowledge across samples into model parameters. In addition to capturing cross-sample regularities, $\texttt{ParamAgent}$ provides twofold flexibility: (i) the parametric module can supply different forms of knowledge depending on the domain, and (ii) the same module can be integrated with different base LLMs, making $\texttt{ParamAgent}$ broadly applicable. Moreover, $\texttt{ParamAgent}$ naturally promotes output diversity by adjusting the sampling temperature of the parametric module. Experiments on programming, math reasoning, and multi-hop question answering benchmarks show that $\texttt{ParamAgent}$ consistently outperforms state-of-the-art baselines, surpassing the best baseline by up to $7.90\%$, $9.41\%$, and $24.30\%$, respectively.
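
The sketch below illustrates the high-level idea described in the abstract, not the authors' implementation: a hypothetical parametric memory module is sampled (with a tunable temperature, which governs diversity) to produce knowledge distilled across prior samples, and this knowledge conditions an arbitrary base LLM. The names `ParamAgentSketch`, `parametric_memory`, and `base_llm` are illustrative assumptions.

```python
# A minimal sketch of the ParamAgent idea, under the assumption that both the
# parametric memory module and the base LLM can be treated as text-in/text-out
# callables. Not the paper's actual interface.
from dataclasses import dataclass
from typing import Callable, List

# Assumed signature: (prompt, temperature) -> generated text.
LanguageModel = Callable[[str, float], str]


@dataclass
class ParamAgentSketch:
    parametric_memory: LanguageModel  # domain-adaptive module that internalizes cross-sample knowledge
    base_llm: LanguageModel           # any base LLM; the memory module is swappable across bases
    memory_temperature: float = 0.7   # higher values yield more diverse knowledge samples

    def retrieve_knowledge(self, task: str, n_samples: int = 3) -> List[str]:
        """Sample knowledge snippets from the parametric memory for this task."""
        prompt = f"Task: {task}\nRelevant knowledge:"
        return [self.parametric_memory(prompt, self.memory_temperature) for _ in range(n_samples)]

    def solve(self, task: str) -> str:
        """Condition the base LLM on knowledge distilled from related problems."""
        knowledge = "\n".join(f"- {k}" for k in self.retrieve_knowledge(task))
        prompt = (
            f"Knowledge distilled from related problems:\n{knowledge}\n\n"
            f"Solve the task:\n{task}"
        )
        return self.base_llm(prompt, 0.0)  # low temperature for the final answer


if __name__ == "__main__":
    # Stub models so the sketch runs end to end without external APIs.
    echo = lambda prompt, temperature: f"[output @ T={temperature}]"
    agent = ParamAgentSketch(parametric_memory=echo, base_llm=echo)
    print(agent.solve("Write a function that reverses a linked list."))
```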
Submission Number: 52