Learning from Supervision with Semantic and Episodic Memory: A Reflective Approach to Agent Adaptation
Abstract: Adapting large language model (LLM) agents to new tasks or domains remains a central challenge in NLP. Traditional approaches such as fine-tuning or parameter-efficient adaptation can be costly, inflexible, and opaque. In this work, we propose a flexible memory-augmented framework that enables LLM agents to learn continuously from both supervised signals and structured critiques without updating model parameters. Our framework distinguishes between $\textit{semantic}$ and $\textit{episodic}$ memory, and introduces two forms of reflective insight, instance-level $\textit{critiques}$ and generalizable $\textit{principles}$, to capture and organize knowledge from labeled examples and their neighborhoods. We investigate how memory should be structured and how it can be used effectively to adapt agents to new scenarios. Across diverse tasks, our method yields accuracy gains of up to 12.5%. We also introduce $\textit{suggestibility}$, a new metric that quantifies how readily models internalize feedback.
Our findings highlight the promise of memory-driven reflective learning for building more adaptive and interpretable LLM agents.
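To make the memory distinction concrete, the sketch below shows one way the two stores described in the abstract might be organized: episodic entries holding labeled examples with instance-level critiques, and a semantic store holding generalizable principles that are injected into the agent's prompt at inference time. This is a minimal illustration, not the authors' implementation; all class names, fields, and the word-overlap retrieval are assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class EpisodicEntry:
    """A single labeled example plus an instance-level critique (hypothetical schema)."""
    task_input: str
    prediction: str
    gold_label: str
    critique: str  # reflective insight tied to this specific example

@dataclass
class MemoryStore:
    """Hypothetical two-part memory: episodic entries and semantic principles."""
    episodic: list = field(default_factory=list)   # list[EpisodicEntry]
    semantic: list = field(default_factory=list)   # list[str]: generalizable principles

    def add_episode(self, entry: EpisodicEntry) -> None:
        self.episodic.append(entry)

    def promote_principle(self, principle: str) -> None:
        """Store a generalizable rule distilled from several related critiques."""
        self.semantic.append(principle)

    def build_prompt_context(self, query: str, k: int = 3) -> str:
        """Naive retrieval: take the k episodes sharing the most words with the query."""
        def overlap(e: EpisodicEntry) -> int:
            return len(set(query.lower().split()) & set(e.task_input.lower().split()))
        nearest = sorted(self.episodic, key=overlap, reverse=True)[:k]
        lines = ["Principles:"] + [f"- {p}" for p in self.semantic]
        lines += ["Similar past cases:"]
        lines += [f"- input: {e.task_input} | critique: {e.critique}" for e in nearest]
        return "\n".join(lines)
```

In this reading, adaptation happens entirely through what is written into and retrieved from these stores, rather than through parameter updates.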
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Language Modeling, Interpretability and Analysis of Models for NLP, Machine Learning for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 4967