Abstract: Self-improving agents aim to continuously acquire new capabilities with minimal supervision. However, current approaches face two key limitations: their self-improvement processes are often rigid and fail to generalize across task domains, and they struggle to scale with increasing agent capabilities. We argue that effective self-improvement requires intrinsic metacognitive learning, defined as an agent’s $\textit{intrinsic}$ ability to actively evaluate, reflect on, and adapt its own learning processes. Drawing inspiration from human metacognition, we introduce a formal framework comprising three components: $\textit{metacognitive knowledge}$ (self-assessment of capabilities, tasks, and learning strategies), $\textit{metacognitive planning}$ (deciding what and how to learn), and $\textit{metacognitive evaluation}$ (reflecting on learning experiences to improve future learning). Analyzing existing self-improving agents, we find they rely predominantly on $\textit{extrinsic}$ metacognitive mechanisms, which are fixed, human-designed loops that limit scalability and adaptability. Examining each component, we contend that many ingredients for intrinsic metacognition are already present. Finally, we explore two key challenges that must be addressed to enable truly sustained, generalized, and aligned self-improvement: how to optimally distribute metacognitive responsibilities between humans and agents, and how to robustly evaluate and improve intrinsic metacognitive learning.
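The three components above can be pictured as a minimal toy loop. This sketch is purely illustrative: the class names, the exponential-smoothing updates, and the placeholder strategies are assumptions for exposition, not the framework's actual mechanics.

```python
# Toy sketch: metacognitive knowledge, planning, and evaluation as a loop.
# All names and update rules are illustrative assumptions, not the paper's method.
from dataclasses import dataclass, field

@dataclass
class MetacognitiveState:
    # Metacognitive knowledge: self-assessed skill per task type.
    skill_estimates: dict = field(default_factory=dict)
    # The current learning strategy and its estimated effectiveness.
    strategy: str = "practice"
    strategy_score: float = 0.0

def plan(state: MetacognitiveState, tasks: list) -> str:
    """Metacognitive planning: pick the task the agent judges it is weakest at."""
    return min(tasks, key=lambda t: state.skill_estimates.get(t, 0.0))

def evaluate(state: MetacognitiveState, task: str, outcome: float) -> None:
    """Metacognitive evaluation: reflect on an outcome and adapt.

    Updates the self-assessment (knowledge) and, if the current strategy
    keeps failing, switches strategy (adapting *how* the agent learns).
    """
    prev = state.skill_estimates.get(task, 0.0)
    state.skill_estimates[task] = 0.5 * prev + 0.5 * outcome
    state.strategy_score = 0.5 * state.strategy_score + 0.5 * outcome
    if state.strategy_score < 0.3:
        state.strategy = "seek_feedback"  # placeholder alternative strategy
        state.strategy_score = 0.5        # reset optimism for the new strategy

state = MetacognitiveState(skill_estimates={"math": 0.8, "coding": 0.2})
task = plan(state, ["math", "coding"])  # picks "coding" (lowest self-estimate)
evaluate(state, task, outcome=0.1)      # poor outcome lowers estimate, switches strategy
```

The key point of the sketch is that the agent itself holds and updates all three components; an extrinsic loop would instead hard-code `plan` and `evaluate` outside the agent.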
Lay Summary: **(1)** As AI agents become more autonomous, a major challenge is enabling them to self-improve without constant human oversight. Most current systems rely on fixed, externally designed self-improvement loops that do not adapt over time or across tasks, limiting their ability to scale in complex, changing environments.
**(2)** We propose a framework for intrinsic metacognitive learning, where agents reflect on what they know, how they learn, and how well their learning strategies are working, then adapt those strategies accordingly. We analyze existing LLM-based agents and show how some already exhibit early signs of this capability, while identifying the components that remain underdeveloped.
**(3)** This research matters because enabling agents to self-improve is key to long-term, general-purpose autonomy. Our work provides a roadmap for developing AI systems that are not only capable of sustained and robust self-improvement, but also safer and more aligned with human goals as they evolve.
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: self-improvement; intrinsic metacognitive learning
Submission Number: 354