ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: tool use; agentic AI; small language model
Abstract: Large language models are powerful generalists, yet solving deep and complex problems such as those of Humanity’s Last Exam (HLE) remains both conceptually challenging and computationally expensive. We show that small orchestrators managing other models and a variety of tools can both push the upper bound of intelligence and improve efficiency in solving difficult agentic tasks. We introduce ToolOrchestra, a method for training small orchestrators that coordinate the use of intelligent tools. ToolOrchestra makes explicit use of reinforcement learning with outcome-, efficiency-, and user-preference-aware rewards. Using ToolOrchestra, we produce Orchestrator, an 8B model that achieves higher accuracy at lower cost than previous tool-use agents while aligning with user preferences about which tools should be used for a given query. On HLE, Orchestrator achieves a score of 37.1\%, outperforming GPT-5 (35.1\%) while being 2.5x more efficient. On $\tau^2$-Bench and FRAMES, Orchestrator surpasses GPT-5 by a wide margin while incurring only about 30\% of the cost. Extensive analysis shows that Orchestrator achieves the best trade-off between performance and cost under multiple metrics, and generalizes robustly to previously unseen tools. These results demonstrate that orchestrating diverse tools with a lightweight model is not only more efficient but also more effective than existing methods, paving the way for practical and scalable tool-augmented reasoning systems.
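The abstract describes reinforcement learning with a reward that combines task outcome, efficiency, and user preference. A minimal sketch of how such a composite reward could be formed is below; the function name, weights, and the specific forms of each term are assumptions for illustration, not the paper's actual formulation.

```python
def orchestration_reward(outcome_correct, cost, max_cost, pref_match,
                         w_outcome=1.0, w_eff=0.3, w_pref=0.2):
    """Hypothetical composite reward for an orchestration trajectory.

    outcome_correct: whether the final answer was judged correct
    cost:            monetary/compute cost of the trajectory
    max_cost:        normalization budget for the cost term
    pref_match:      fraction of tool calls matching user preference, in [0, 1]
    """
    r_outcome = 1.0 if outcome_correct else 0.0
    # Cheaper trajectories score higher; clipped so reward stays bounded.
    r_eff = 1.0 - min(cost / max_cost, 1.0)
    r_pref = pref_match
    return w_outcome * r_outcome + w_eff * r_eff + w_pref * r_pref
```

With the assumed default weights, a correct, zero-cost, fully preference-aligned trajectory scores 1.5, while an incorrect trajectory at full budget with no preference match scores 0.0.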
Primary Area: foundation or frontier models, including LLMs
Submission Number: 13689