Keywords: Large Language Models, Multi-LLM Query Serving, LLM Routing, Cost-Aware Inference, Online Algorithm
TL;DR: We propose the first efficient, training-free online routing algorithm for high-volume LLM serving under token budget constraints, achieving significant improvements in both routing performance and cost efficiency.
Abstract: Increasing demand for Large Language Model (LLM) query services imposes substantial deployment and computation costs.
LLM routing offers a cost-efficient solution by directing queries to the optimal LLM based on model and query features.
However, existing work focuses on offline scenarios and struggles to adapt to online settings with high query volumes and constrained token budgets.
In this work, we introduce PORT, the first training-free algorithm designed for online routing scenarios.
Our algorithm leverages approximate nearest neighbor search to efficiently estimate query features and performs a one-time optimization over a small set of initial queries to learn a routing strategy that guides future routing.
We provide theoretical guarantees showing that our algorithm achieves a competitive ratio of $1 - o(1)$ under natural assumptions. Extensive experiments on 3 benchmark datasets against 8 baselines validate these results, showing an average improvement of **3.55$\times$** in overall performance, **1.85$\times$** in cost efficiency, and nearly **4.25$\times$** in throughput.
Our code is available at https://github.com/fzwark/PORT.
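To make the abstract's high-level recipe concrete, here is a minimal sketch of nearest-neighbor-based, cost-aware routing. This is an illustrative assumption, not the PORT implementation: the function names, the brute-force similarity search (standing in for a real approximate nearest neighbor index), and the score-per-cost selection rule are all hypothetical simplifications of the idea of estimating a query's per-model quality from a small set of initial (anchor) queries and routing under a token budget.

```python
import numpy as np

def route(query_vec, anchor_vecs, anchor_scores, model_costs, budget_left):
    """Hypothetical sketch of cost-aware online routing.

    query_vec     : embedding of the incoming query
    anchor_vecs   : embeddings of the small set of initial queries
    anchor_scores : per-anchor array of quality scores, one per candidate LLM
    model_costs   : estimated token cost of serving the query on each LLM
    budget_left   : remaining token budget
    Returns the index of the chosen LLM, or None if no model is affordable.
    """
    # Nearest anchor by cosine similarity (a stand-in for a real ANN index).
    sims = anchor_vecs @ query_vec / (
        np.linalg.norm(anchor_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    est = anchor_scores[np.argmax(sims)]  # estimated quality per model

    # Greedy cost-aware choice: best estimated quality per unit cost
    # among the models that still fit in the remaining budget.
    affordable = [m for m, c in enumerate(model_costs) if c <= budget_left]
    if not affordable:
        return None
    return max(affordable, key=lambda m: est[m] / model_costs[m])
```

In this toy version the "learned routing strategy" is just the anchor score table plus the quality-per-cost rule; the actual algorithm instead solves a one-time optimization over the initial queries to derive the routing strategy.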
Submission Number: 49