Keywords: Tool-Call Agent, Iterative Learning
Abstract: Augmenting Large Language Models (LLMs) with external tools enables them to execute complex, multi-step tasks. However, tool learning is hampered by static synthetic data pipelines, in which data generation and model training are executed as two separate, non-interactive processes. Such pipelines fail to adaptively focus on a model's specific weaknesses and allow noisy labels to persist, degrading training efficiency.
We introduce \textbf{LoopTool}, a fully automated, model-aware data evolution framework that closes this loop by tightly integrating data synthesis and model training. LoopTool iteratively refines both the data and the model through three synergistic modules: (1) \textit{Greedy Capability Probing (GCP)} diagnoses the model's mastered and failed capabilities; (2) \textit{Judgement-Guided Label Verification (JGLV)} uses an open-source judge model to find and correct annotation errors, progressively purifying the dataset; and (3) \textit{Error-Driven Data Expansion (EDDE)} generates new, challenging samples based on identified failures. This closed-loop process operates within a cost-effective, open-source ecosystem, eliminating dependence on expensive closed-source APIs.
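The closed loop can be pictured as a simple training driver that alternates probing, verification, expansion, and retraining. The sketch below is illustrative only; `model`, `judge`, `synthesize_from_failures`, and `train` are hypothetical placeholders under assumed interfaces, not the paper's actual implementation.

```python
# Minimal sketch of one LoopTool-style iteration (hypothetical names/APIs):
# probe the model, verify labels with an open-source judge, expand data from
# observed errors, then retrain on the refreshed dataset.

def looptool_iteration(model, dataset, judge, num_rounds=3):
    for _ in range(num_rounds):
        # 1) Greedy Capability Probing (GCP): split samples into mastered vs. failed.
        mastered, failed = [], []
        for sample in dataset:
            prediction = model.generate(sample["prompt"])
            (mastered if prediction == sample["label"] else failed).append(sample)

        # 2) Judgement-Guided Label Verification (JGLV): the judge checks failed
        #    samples; correct labels are kept, fixable ones are corrected.
        verified = []
        for sample in failed:
            verdict = judge.review(sample["prompt"], sample["label"])
            if verdict.label_is_correct:
                verified.append(sample)
            elif verdict.corrected_label is not None:
                verified.append({**sample, "label": verdict.corrected_label})

        # 3) Error-Driven Data Expansion (EDDE): synthesize new, harder samples
        #    targeting the failure patterns surfaced in this round.
        expanded = synthesize_from_failures(verified)

        # Retrain on the purified + expanded data and continue the loop.
        dataset = mastered + verified + expanded
        model = train(model, dataset)
    return model, dataset
```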
Experiments show that our 8B model trained with LoopTool significantly surpasses its 32B data generator and achieves new state-of-the-art results at its scale on the BFCL-v3 and ACEBench benchmarks. Our work demonstrates that closed-loop, self-refining data pipelines can dramatically enhance the tool-use capabilities of LLMs.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 6751