EvoTool: Self-Evolving Tool-Use Policy Optimization in LLM Agents via Blame-Aware Mutation and Diversity-Aware Selection

Published: 31 May 2026 · Last Modified: 15 May 2026 · ACL 2026 Main · CC BY 4.0
Abstract: LLM-based agents depend on effective tool-use policies to solve complex tasks, yet optimizing these policies remains challenging due to delayed supervision and the difficulty of credit assignment in long-horizon trajectories. Existing optimization approaches tend to be either monolithic, and thus prone to entangling behaviors, or single-aspect, ignoring cross-module error propagation. To address these limitations, we propose EVOTOOL, a self-evolving framework that optimizes a modular tool-use policy via a gradient-free evolutionary paradigm. EVOTOOL decomposes an agent's tool-use policy into four modules: Planner, Selector, Caller, and Synthesizer, and iteratively improves them in a self-improving loop through three novel mechanisms. Trajectory-Grounded Blame Attribution uses diagnostic traces to localize failures to a specific module. Feedback-Guided Targeted Mutation then edits only that module via natural-language critique. Diversity-Aware Population Selection preserves complementary candidates to maintain solution diversity. Across four benchmarks, EVOTOOL outperforms strong baselines by over 5 points on both GPT-4.1 and Qwen3-8B, while achieving superior efficiency and transferability.
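The evolutionary loop described in the abstract can be sketched in miniature. The code below is a toy illustration, not the authors' implementation: each of the four modules is reduced to a numeric "skill," a trajectory's fitness is its weakest module (a stand-in for trajectory-grounded diagnosis), mutation edits only the blamed module, and selection keeps at most one candidate per blamed module to preserve complementary fixes. All function names and the fitness model are assumptions made for illustration.

```python
MODULES = ["Planner", "Selector", "Caller", "Synthesizer"]

def fitness(policy):
    # Toy weakest-link fitness: a trajectory fails at its weakest module.
    return min(policy.values())

def attribute_blame(policy):
    # Stand-in for Trajectory-Grounded Blame Attribution:
    # the lowest-skill module is blamed for the failure.
    return min(policy, key=policy.get)

def mutate(policy):
    # Stand-in for Feedback-Guided Targeted Mutation:
    # edit only the blamed module; all other modules stay frozen.
    blamed = attribute_blame(policy)
    child = dict(policy)
    child[blamed] += 1
    return child

def select(population, k):
    # Stand-in for Diversity-Aware Population Selection:
    # rank by fitness, but keep at most one candidate per blamed
    # module so complementary candidates survive together.
    ranked = sorted(population, key=fitness, reverse=True)
    kept, seen = [], set()
    for cand in ranked:
        blamed = attribute_blame(cand)
        if blamed not in seen:
            kept.append(cand)
            seen.add(blamed)
        if len(kept) == k:
            break
    return kept or ranked[:k]

def evolve(policy, generations=8, offspring=4, k=2):
    # Self-improving loop: attribute blame, mutate the blamed
    # module, then select a diverse surviving population.
    population = [policy]
    for _ in range(generations):
        children = [mutate(p) for p in population for _ in range(offspring)]
        population = select(population + children, k)
    return max(population, key=fitness)

seed = {"Planner": 1, "Selector": 3, "Caller": 2, "Synthesizer": 3}
best = evolve(seed)
```

In this sketch the population's best fitness never decreases, since survivors are chosen from parents plus children, and the targeted mutation keeps improving whichever module the blame attribution flags, mirroring the paper's localized-edit design at a very coarse level.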