Position: Truly Self-Improving Agents Require Intrinsic Metacognitive Learning

Published: 01 May 2025, Last Modified: 23 Jul 2025 · ICML 2025 Position Paper Track poster · CC BY 4.0
Abstract: Self-improving agents aim to continuously acquire new capabilities with minimal supervision. However, current approaches face two key limitations: their self-improvement processes are often rigid and fail to generalize across task domains, and they struggle to scale with increasing agent capabilities. We argue that effective self-improvement requires intrinsic metacognitive learning, defined as an agent’s $\textit{intrinsic}$ ability to actively evaluate, reflect on, and adapt its own learning processes. Drawing inspiration from human metacognition, we introduce a formal framework comprising three components: $\textit{metacognitive knowledge}$ (self-assessment of capabilities, tasks, and learning strategies), $\textit{metacognitive planning}$ (deciding what and how to learn), and $\textit{metacognitive evaluation}$ (reflecting on learning experiences to improve future learning). Analyzing existing self-improving agents, we find they rely predominantly on $\textit{extrinsic}$ metacognitive mechanisms: fixed, human-designed loops that limit scalability and adaptability. Examining each component, we contend that many ingredients for intrinsic metacognition are already present. Finally, we explore how to optimally distribute metacognitive responsibilities between humans and agents, and how to robustly evaluate and improve intrinsic metacognitive learning — key challenges that must be addressed to enable truly sustained, generalized, and aligned self-improvement.
Lay Summary: **(1)** As AI agents become more autonomous, a major challenge is enabling them to self-improve without constant human oversight. Most current systems rely on fixed, externally designed self-improvement loops that do not adapt over time or across tasks, which limits their ability to scale in complex, changing environments. **(2)** We propose a framework for intrinsic metacognitive learning, in which agents reflect on what they know, how they learn, and how well their learning strategies are working, and then adapt those strategies accordingly. We analyze existing LLM-based agents and show that some already exhibit early signs of this capability, while identifying the components that remain underdeveloped. **(3)** This research matters because enabling agents to self-improve is key to long-term, general-purpose autonomy. Our work provides a roadmap for developing AI systems that not only exhibit sustained and robust self-improvement but also become safer and more aligned with human goals as they evolve.
Verify Author Names: My co-authors have confirmed that their names are spelled correctly both on OpenReview and in the camera-ready PDF. (If needed, please update ‘Preferred Name’ in OpenReview to match the PDF.)
No Additional Revisions: I understand that after the May 29 deadline, the camera-ready submission cannot be revised before the conference. I have verified with all authors that they approve of this version.
Pdf Appendices: My camera-ready PDF file contains both the main text (not exceeding the page limits) and all appendices that I wish to include. I understand that any other supplementary material (e.g., separate files previously uploaded to OpenReview) will not be visible in the PMLR proceedings.
Latest Style File: I have compiled the camera-ready paper with the latest ICML 2025 style files <https://media.icml.cc/Conferences/ICML2025/Styles/icml2025.zip> and the compiled PDF includes an unnumbered Impact Statement section.
Paper Verification Code: ZGQ5M
Permissions Form: pdf
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: self-improvement; intrinsic metacognitive learning
Submission Number: 354