MAGELLAN: Metacognitive predictions of learning progress guide autotelic LLM agents in large goal spaces
TL;DR: We introduce MAGELLAN, a metacognitive framework that lets LLM agents learn to predict their competence and learning progress online to guide their curriculum in large goal spaces.
Abstract: Open-ended learning agents must efficiently prioritize goals in vast possibility spaces, focusing on those that maximize learning progress (LP). When such autotelic exploration is achieved by LLM agents trained with online RL in high-dimensional and evolving goal spaces, a key challenge for LP prediction is modeling one’s own competence, a form of metacognitive monitoring. Traditional approaches either require extensive sampling or rely on brittle expert-defined goal groupings. We introduce MAGELLAN, a metacognitive framework that lets LLM agents learn to predict their competence and learning progress online. By capturing semantic relationships between goals, MAGELLAN enables sample-efficient LP estimation and dynamic adaptation to evolving goal spaces through generalization. In an interactive learning environment, we show that MAGELLAN improves LP prediction efficiency and goal prioritization, and is the only method that enables the agent to fully master a large and evolving goal space. These results demonstrate how augmenting LLM agents with a metacognitive ability for LP predictions can effectively scale curriculum learning to open-ended goal spaces.
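For intuition only, the sketch below shows one minimal way to estimate per-goal learning progress from a learned competence predictor over goal embeddings, in the spirit of the abstract. It is not the paper's implementation (see the linked code for that): the `embed` function, the single linear head trained online, and the fixed delay window are illustrative assumptions.

```python
import numpy as np

# Minimal illustrative sketch, NOT the MAGELLAN implementation.
# Assumes `z = embed(goal)` gives a fixed-size semantic goal embedding
# (e.g., produced by the LLM), so that competence predictions
# generalize across semantically related goals.

class LPEstimator:
    def __init__(self, dim, delay=50, lr=0.1):
        self.w = np.zeros(dim)      # weights of the competence predictor
        self.history = []           # past weight snapshots for delayed predictions
        self.delay = delay          # time lag used to measure progress
        self.lr = lr

    def competence(self, z, w=None):
        """Predicted success probability for goal embedding z."""
        w = self.w if w is None else w
        return 1.0 / (1.0 + np.exp(-z @ w))

    def update(self, z, success):
        """Online logistic-regression step after attempting a goal."""
        self.history.append(self.w.copy())
        grad = (self.competence(z) - float(success)) * z
        self.w -= self.lr * grad

    def learning_progress(self, z):
        """Absolute change in predicted competence over the delay window."""
        if len(self.history) < self.delay:
            return 0.0
        old_w = self.history[-self.delay]
        return abs(self.competence(z) - self.competence(z, old_w))
```

In an autotelic curriculum, goals would then be sampled in proportion to their estimated LP, so the agent keeps practicing goals on which its predicted competence is changing fastest.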
Lay Summary: Imagine an AI agent as a curious student exploring a vast library filled with countless books on different subjects. The student wants to learn as much as possible, but with limited time, they need to choose which books will teach them the most. This is exactly the challenge faced by AI agents designed for open-ended learning.
Traditional AI systems struggle with this problem because they either spend too much time testing every possible choice or rely on pre-programmed categories that don't adapt well to new situations. It's like having a student who either reads random pages from every book or only sticks to a rigid reading list that never changes.
Our solution, called MAGELLAN, gives AI agents a crucial ability: self-awareness about their own learning. Just as good students develop intuition about which subjects they're ready to tackle next, MAGELLAN helps AI agents predict how much they'll learn from different goals before committing significant time to them.
The key insight is that learning goals aren't isolated islands; they're connected in meaningful ways. For example, learning to ride a bicycle helps with learning to ride a motorcycle. MAGELLAN captures these relationships, allowing agents to make educated guesses about their progress on new goals based on their experience with related ones.
We tested this approach in a complex learning environment where goals constantly evolved and multiplied. While other methods struggled to keep up, MAGELLAN enabled our AI agent to efficiently prioritize its learning and eventually master the entire space of available goals.
This research shows how giving AI agents metacognitive abilities, essentially teaching them to think about their own thinking, can dramatically improve their ability to learn in open-ended, ever-changing environments. This could lead to more adaptable AI systems that continue learning and improving throughout their deployment.
Link To Code: https://github.com/flowersteam/MAGELLAN
Primary Area: Deep Learning->Large Language Models
Keywords: LLM agents, Open-Ended Learning, Learning Progress, Goal-conditioned RL, Automatic Curriculum Learning
Submission Number: 15991