Keywords: Large Language Models, LLM Routing, Cost-Aware Inference, Online Algorithm, Multi-LLM Serving
TL;DR: We propose the first efficient, training-free online routing algorithm for high-volume LLM serving under token budget constraints, achieving significant improvements in both routing performance and cost efficiency.
Abstract: The increasing demand for Large Language Model (LLM) services imposes substantial deployment and computation costs on providers.
LLM routing offers a cost-efficient solution by directing queries to the optimal LLM based on model and query features.
However, existing works primarily focus on offline scenarios and struggle to adapt to online settings with high query volume and constrained token budgets.
In this work, we introduce the first training-free algorithm for online routing scenarios.
Our algorithm leverages approximate nearest neighbor search to efficiently estimate the features of incoming queries, and performs a one-time optimization over a small set of initial queries to learn routing weights that guide all future routing decisions.
We provide a theoretical guarantee that the algorithm achieves a competitive ratio of $1 - o(1)$ under natural assumptions, which is further validated by extensive experiments across 3 benchmark datasets and 8 baselines, showing an average improvement of 3.55$\times$ in performance, 1.85$\times$ in cost efficiency, and nearly 4.25$\times$ in throughput.
Our code is available at https://github.com/fzwark/PORT.
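The mechanism summarized in the abstract — estimating a query's per-model features from its nearest neighbors among a small set of scored initial queries, then routing with learned cost-aware weights — might be sketched roughly as follows. The function names, score form, and array shapes are illustrative assumptions for exposition, not the released implementation.

```python
import numpy as np

def route(query_emb, bank_embs, bank_perf, bank_cost, weights, k=3):
    """Route one query to a model index.

    Illustrative sketch (not the paper's API): estimate the query's
    per-model performance and token cost by averaging over its k nearest
    neighbors in a small bank of already-scored initial queries, then
    pick the model maximizing a weighted performance-minus-cost score.
    """
    # Similarity via dot product (assumes embeddings are L2-normalized).
    sims = bank_embs @ query_emb
    nn = np.argsort(sims)[-k:]                # indices of the k nearest neighbors
    est_perf = bank_perf[nn].mean(axis=0)     # shape (n_models,): estimated quality
    est_cost = bank_cost[nn].mean(axis=0)     # shape (n_models,): estimated token cost
    scores = est_perf - weights * est_cost    # budget-aware routing score per model
    return int(np.argmax(scores))
```

In a real deployment the exact nearest-neighbor scan would be replaced by an approximate index, and `weights` would come from the one-time optimization over the initial queries rather than being set by hand.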
Supplementary Material: zip
Primary Area: Infrastructure (e.g., libraries, improved implementation and scalability, distributed solutions)
Submission Number: 11413