Keywords: Large Language Models, LLM Routing, Benchmark
Abstract: Large language model (LLM) routing assigns each query to the most suitable model from an ensemble. We introduce LLMRouterBench, a large-scale benchmark and unified framework for LLM routing. It comprises over 400K instances drawn from 21 datasets and 33 models, provides comprehensive metrics for both performance-oriented and performance–cost trade-off routing, and integrates 10 representative routing baselines. Using LLMRouterBench, we systematically re-evaluate the field. While confirming strong model complementarity (the central premise of LLM routing), we find that many routing methods perform similarly under unified evaluation, and that several recent approaches, including commercial routers, fail to reliably outperform a simple baseline. Meanwhile, a substantial gap to the Oracle remains, driven primarily by persistent model-recall failures. We further show that the choice of backbone embedding model has limited impact and that larger ensembles yield diminishing returns compared to careful model curation; the benchmark also supports latency-aware analysis. All code and data are available at https://anonymous.4open.science/r/LLMRouterBench-F524.
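
For intuition, the following is a minimal, hypothetical sketch of the routing setup the abstract describes: an Oracle router that picks the highest-quality model per query, and a performance–cost trade-off router. The model names, quality scores, costs, and the trade-off weight lam are illustrative assumptions, not part of LLMRouterBench's actual API.

    # Conceptual sketch of LLM routing (not the LLMRouterBench API).
    # Given per-query quality estimates and per-model costs, an Oracle
    # picks the best model per query, while a trade-off router maximizes
    # quality - lam * cost. All values below are hypothetical.
    from typing import Dict

    quality: Dict[str, float] = {"model_a": 0.82, "model_b": 0.78, "model_c": 0.90}
    cost_per_query: Dict[str, float] = {"model_a": 0.002, "model_b": 0.0005, "model_c": 0.02}

    def oracle_route(quality: Dict[str, float]) -> str:
        """Oracle routing: always pick the highest-quality model (upper bound)."""
        return max(quality, key=quality.get)

    def tradeoff_route(quality: Dict[str, float], cost: Dict[str, float], lam: float = 5.0) -> str:
        """Performance-cost trade-off routing: maximize quality - lam * cost."""
        return max(quality, key=lambda m: quality[m] - lam * cost[m])

    print(oracle_route(quality))                    # model_c
    print(tradeoff_route(quality, cost_per_query))  # model_a at lam = 5.0

With lam = 5.0 the trade-off router prefers model_a over the Oracle's model_c, illustrating how a router can trade a small quality drop for a large cost saving.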
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation, metrics, reproducibility
Contribution Types: Reproduction study, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 9981