Keywords: Language Agents, Data Synthesis, Reinforcement Learning
TL;DR: We propose early experience, a scalable, reward-free training paradigm where agents learn from their own rollouts. We design two methods under this paradigm, both of which improve success rates, robustness, and downstream RL performance across eight diverse benchmarks.
Abstract: A long-term goal of language agents is to learn and improve through their own experience, ultimately outperforming humans in complex, real-world tasks.
However, training agents from experience data with reinforcement learning remains difficult in many environments, which either lack verifiable rewards (e.g., websites) or require inefficient long-horizon rollouts (e.g., multi-turn tool use).
As a result, most current agents rely on supervised fine-tuning on expert data, which is difficult to scale and generalizes poorly. This limitation stems from the nature of expert demonstrations: they capture only a narrow range of scenarios and expose the agent to limited environment diversity.
We address this limitation with a middle-ground paradigm we call *early experience*: interaction data generated by the agent's own actions, where the resulting future states serve as supervision without reward signals.
Within this paradigm, we study two strategies for using such data: (1) Implicit world modeling, which uses collected states to ground the policy in environment dynamics; and (2) Self-reflection, where the agent learns from its own suboptimal actions to improve reasoning and decision making.
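To make the paradigm concrete, below is a minimal, hypothetical sketch (not the paper's code) of how early-experience data and the two training signals could be constructed. The names `env.step`, `policy.propose_actions`, `expert_action_for`, and the prompt templates are illustrative assumptions, not interfaces from the submission.

```python
# Minimal sketch of early-experience data construction (illustrative only).
# The environment, policy, and prompt formats below are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Transition:
    state: str        # observation/prompt shown to the agent
    action: str       # an action the agent itself proposed
    next_state: str   # resulting environment state: the reward-free supervision


def collect_early_experience(env, policy, expert_states, k_alternatives=3):
    """Roll out the agent's own alternative actions from expert-visited states."""
    data = []
    for state in expert_states:
        for action in policy.propose_actions(state, k=k_alternatives):
            next_state = env.step(state, action)  # no reward is recorded
            data.append(Transition(state, action, next_state))
    return data


def world_modeling_examples(transitions):
    """Implicit world modeling: predict the resulting state from (state, action)."""
    return [
        {"input": f"State: {t.state}\nAction: {t.action}\nPredict the next state.",
         "target": t.next_state}
        for t in transitions
    ]


def self_reflection_examples(transitions, expert_action_for):
    """Self-reflection: contrast the agent's own action and its observed outcome
    with the expert action, so the model learns why the alternative was suboptimal."""
    return [
        {"input": (f"State: {t.state}\nYour action: {t.action}\n"
                   f"Observed outcome: {t.next_state}\n"
                   f"Explain why this may be suboptimal, then give a better action."),
         "target": expert_action_for(t.state)}
        for t in transitions
    ]
```

In both cases the supervision comes from states the agent itself induced, not from rewards, which is what distinguishes early experience from both imitation learning and reinforcement learning.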
We evaluate across eight diverse environments and multiple model families.
Our approaches consistently improve effectiveness and out-of-domain generalization, highlighting the value of early experience.
Moreover, in environments with verifiable rewards, our results provide promising signals that early experience offers a strong foundation for subsequent reinforcement learning, positioning it as a practical bridge between imitation learning and fully experience-driven agents.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 13137