Keywords: Self-improving agents, LLM agents
TL;DR: We propose a co-evolving agents framework in which a target agent and a failure agent learn together by transforming failures into informative hard negatives, yielding more robust and generalizable performance.
Abstract: The rapid progress of large foundation models has accelerated the development of task-specialized agents across diverse domains. However, the effectiveness of such agents remains tightly coupled to the quality of their training data, and curating task-specific datasets is costly and often infeasible in real-world scenarios.
Recent work has explored self-improving agents that autonomously generate, refine, and re-train on their own trajectories. A prominent line of approaches further leverages preference optimization by pairing predicted trajectories with scarce ground-truth trajectories, enabling agents to learn directly from their own failures.
While these methods outperform supervised fine-tuning, their heavy reliance on predicted trajectories under limited ground-truth supervision leaves them prone to overfitting.
To address this, we propose a co-evolving agents framework in which a target agent improves jointly with an auxiliary failure agent. The failure agent learns through preference optimization over failure trajectories from both the target agent and itself, thereby generating hard negatives that are close to success yet remain failures.
Incorporating these informative hard negatives into the target agent’s optimization sharpens decision boundaries and enhances generalization.
Our comprehensive analysis and experiments across benchmark datasets show that our method not only improves performance but also demonstrates that failures, rather than being used as-is, can be systematically transformed into structured and valuable learning signals for self-improving agents.
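To make the abstract's core idea concrete, below is a minimal, hedged sketch of preference optimization where the rejected trajectories are hard negatives produced by a failure agent. The paper's actual training procedure is not specified here; the DPO-style objective, the tensor names, and the toy values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: DPO-style preference optimization where the
# "rejected" side comes from a failure agent's hard-negative trajectories.
# All names and values below are hypothetical placeholders.
import torch
import torch.nn.functional as F


def preference_loss(logp_chosen, logp_rejected,
                    ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style objective: push the target policy to prefer ground-truth
    success trajectories (chosen) over near-success hard negatives (rejected)."""
    chosen_margin = logp_chosen - ref_logp_chosen        # target vs. frozen reference
    rejected_margin = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()


# Toy usage: per-trajectory log-probabilities under the target policy and a
# frozen reference policy (shape: [batch]).
logp_chosen = torch.tensor([-12.3, -9.8])     # scarce ground-truth trajectories
logp_rejected = torch.tensor([-11.9, -10.1])  # hard negatives from the failure agent
ref_chosen = torch.tensor([-13.0, -10.5])
ref_rejected = torch.tensor([-12.0, -10.0])

loss = preference_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected)
print(loss)  # scalar loss used to update the target agent
```

Because the hard negatives sit close to the success boundary, the margin term the loss optimizes is small, which is what would sharpen the target agent's decision boundary in the sense described above.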
Supplementary Material: pdf
Primary Area: foundation or frontier models, including LLMs
Submission Number: 17796