Self-Tooling Agent: Dynamically Extending Agent Capabilities through Scientific Tool Synthesis and Invocation
Keywords: Agent, Tool use, Reinforcement learning
Abstract: Tools are essential for defining an agent's capabilities, yet a fundamental challenge remains: general-purpose agents lack expert tools, while specialized scientific agents rely on manually crafted toolsets that are expensive to build and do not generalize across domains. This tool-creation bottleneck limits agent adaptability and performance on novel tasks. To address this challenge, we introduce the Self-Tooling Agent (STA), an agentic framework in which the policy LLM learns to dynamically arbitrate between invoking existing tools and synthesizing new, specialized ones as needed. Specifically, the training dataset is generated by reverse-engineering contexts from expert tools sourced from multiple scientific agents, while a dynamic, interactive environment provides a sandboxed space for tool execution and registration. The framework trains the policy LLM in two stages: supervised fine-tuning for syntax learning, followed by reinforcement learning with a principled, multi-component reward function that optimizes the LLM's strategic decision-making. Extensive evaluations on a diverse suite of benchmarks, from complex scientific QA to standard function-calling leaderboards, demonstrate that the proposed STA significantly outperforms baselines that rely on fixed toolsets, including specialized agents and powerful proprietary models. This work establishes that empowering an agent to autonomously expand its own capabilities is a critical step towards creating more adaptable and resourceful scientific agents.
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 5420
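To make the abstract's arbitration-and-registration loop concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of an agent that either invokes a registered tool or synthesizes and registers a new one. All names here (ToolRegistry, policy_decide, sandbox_exec) are hypothetical placeholders for the components the abstract describes; a real system would use an LLM policy and a hardened sandbox.

```python
# Hypothetical sketch of the self-tooling loop described in the abstract.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class ToolRegistry:
    """Holds callable tools the agent may invoke by name."""
    tools: Dict[str, Callable[..., str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self.tools[name] = fn

    def has(self, name: str) -> bool:
        return name in self.tools


def sandbox_exec(source: str, fn_name: str) -> Callable[..., str]:
    """Stand-in for a sandboxed executor: compiles synthesized tool code in an
    isolated namespace and returns the resulting callable."""
    namespace: dict = {}
    exec(source, namespace)  # a real system would isolate and resource-limit this
    return namespace[fn_name]


def policy_decide(task: str, registry: ToolRegistry) -> dict:
    """Placeholder for the policy LLM's arbitration step: reuse an existing
    tool if one fits the task, otherwise emit code for a new tool."""
    if registry.has("convert_units"):
        return {"action": "invoke", "name": "convert_units", "args": {"x": 3.0}}
    return {
        "action": "synthesize",
        "name": "convert_units",
        "code": "def convert_units(x):\n    return f'{x * 2.54} cm'",
    }


def run_step(task: str, registry: ToolRegistry) -> str:
    decision = policy_decide(task, registry)
    if decision["action"] == "synthesize":
        tool = sandbox_exec(decision["code"], decision["name"])
        registry.register(decision["name"], tool)  # new capability persists for later steps
        return tool(3.0)
    return registry.tools[decision["name"]](**decision["args"])


if __name__ == "__main__":
    reg = ToolRegistry()
    print(run_step("convert 3 inches to cm", reg))  # first call: synthesizes and registers the tool
    print(run_step("convert 3 inches to cm", reg))  # second call: invokes the registered tool
```

In this toy loop, the registry grows as the agent works, which mirrors the abstract's claim that the agent expands its own toolset rather than relying on a fixed one; the SFT/RL training of the arbitration policy itself is outside the scope of this sketch.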