Keywords: Model Router, Large Language Model, LLM Reasoning, Efficient Reasoning, Reinforcement Learning
Abstract: Chain-of-thought reasoning has proven essential for enhancing the complex reasoning abilities of Large Language Models (LLMs), but it also incurs high computational costs. Recent advances have explored routing queries among multiple models and shown it to be a promising approach. However, prior work operates directly at the task level, i.e., assigning entire user queries to suitable LLMs, which prevents hybrid LLMs from truly collaborating on finer-grained sub-tasks. Collaboration at the level of intermediate reasoning steps (thoughts) could enable more efficient coordination, but it also poses significant challenges for router scheduling, placing immense demands on the quality of task decomposition and the precision of the router. To address this, we propose **R2-Reasoner**, a novel framework centered around **a Reinforced Model Router** designed to efficiently scale LLM reasoning. The router orchestrates collaboration across 9 heterogeneous models, whose parameter scales range from under 1B to hundreds of billions, by first breaking a complex query into subtasks with a decomposer and then assigning each subtask to the optimal model with a subtask allocator, balancing performance against cost. Training the router involves a two-stage alternating process for the decomposer and the allocator, integrating supervised fine-tuning with reinforcement learning to enable effective self-supervised refinement. Extensive experiments across six challenging reasoning benchmarks demonstrate that R2-Reasoner reduces API costs by 84.46% compared with state-of-the-art baselines while maintaining competitive reasoning accuracy. Our framework paves the way for more scalable and efficient reasoning systems. Our code is open-source at [https://anonymous.4open.science/r/R2_Reasoner](https://anonymous.4open.science/r/R2_Reasoner).
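To make the decompose-then-allocate idea from the abstract concrete, here is a minimal Python sketch of that pipeline. It is not the authors' implementation: the names `MODEL_POOL`, `decompose`, `allocate`, and the length-based difficulty proxy are all hypothetical stand-ins for the trained decomposer and allocator described in the paper.

```python
# Hypothetical sketch of the decompose-then-allocate routing pipeline.
# In R2-Reasoner, decompose() and allocate() are trained LLM components
# (fine-tuned via SFT + RL); here they are toy placeholders.

from dataclasses import dataclass

# Hypothetical pool of heterogeneous models, ordered from cheapest to
# most capable (the paper uses 9 models from <1B to hundreds of billions).
MODEL_POOL = ["tiny-0.5b", "small-7b", "medium-70b", "large-frontier"]


@dataclass
class Subtask:
    text: str


def decompose(query: str) -> list[Subtask]:
    """Stand-in for the trained decomposer: split a complex query
    into ordered reasoning subtasks."""
    return [Subtask(part) for part in query.split(";")]


def allocate(subtask: Subtask) -> str:
    """Stand-in for the trained allocator: pick the cheapest model
    expected to handle the subtask (performance/cost trade-off)."""
    # Toy difficulty proxy: longer subtask text -> stronger model.
    difficulty = min(len(subtask.text) // 40, len(MODEL_POOL) - 1)
    return MODEL_POOL[difficulty]


def r2_reason(query: str) -> list[tuple[str, str]]:
    """Build a routing plan: each subtask paired with its assigned model."""
    return [(st.text.strip(), allocate(st)) for st in decompose(query)]


if __name__ == "__main__":
    plan = r2_reason(
        "extract the given numbers; set up the equation; "
        "solve it and carefully verify the final answer step by step"
    )
    for step, model in plan:
        print(f"{model:>14} <- {step}")
```

The design point the sketch illustrates is that routing happens per reasoning step rather than per query, so cheap models can absorb easy subtasks while expensive models are reserved for the hard ones.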
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 24566