Keywords: continual learning, LLM agents, in-context learning, human feedback, text-to-SQL
TL;DR: We propose an agent architecture and memory representation that enables continual learning of domain knowledge through human feedback in the text-to-SQL setting.
Abstract: Foundation models (FMs) can generate SQL queries from natural language questions but struggle with database-specific schemas and tacit domain knowledge. We introduce a framework for continual learning from human feedback in text-to-SQL, where a learning agent receives natural language feedback to refine queries and distills the revealed knowledge for reuse on future tasks. This distilled knowledge is stored in a structured memory, enabling the agent to improve execution accuracy over time. We design and evaluate several variants of a learning agent architecture that differ in how they capture and retrieve past experiences. Experiments on the BIRD benchmark Dev set show that memory-augmented agents, particularly the Procedural Agent, achieve significant accuracy gains and error reduction by leveraging human-in-the-loop feedback. Our results highlight the importance of transforming tacit human expertise into reusable knowledge, paving the way for more adaptive, domain-aware text-to-SQL systems that continually learn from a human-in-the-loop.
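The loop the abstract describes — retrieve distilled knowledge, generate SQL, refine on natural-language feedback, distill the feedback back into memory — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `KnowledgeMemory`, `learning_step`, the keyword-overlap retrieval, and the naive distillation step are all assumptions for illustration.

```python
class KnowledgeMemory:
    """Structured store of knowledge distilled from human feedback.
    (Hypothetical: the paper's memory representation may differ.)"""

    def __init__(self):
        self.entries = []  # list of (keyword set, distilled note)

    def add(self, keywords, note):
        self.entries.append((set(keywords), note))

    def retrieve(self, question, k=3):
        # Toy retrieval: rank stored notes by keyword overlap with the question.
        words = set(question.lower().split())
        scored = sorted(self.entries, key=lambda e: -len(e[0] & words))
        return [note for kws, note in scored[:k] if kws & words]


def learning_step(memory, question, generate_sql, get_feedback):
    """One interaction: generate SQL using retrieved knowledge, refine it
    if the human gives feedback, then store the feedback for future tasks."""
    hints = memory.retrieve(question)
    sql = generate_sql(question, hints)
    feedback = get_feedback(sql)  # natural-language correction, or None
    if feedback:
        # Refine the query with the feedback, then distill it into memory.
        sql = generate_sql(question, hints + [feedback])
        memory.add(question.lower().split(), feedback)  # naive distillation
    return sql
```

A later question touching the same concept (e.g. "revenue") would then retrieve the stored note as a hint, which is the mechanism by which execution accuracy can improve over time.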
Serve As Reviewer: ~Sivapriya_Vellaichamy1
Submission Number: 9