Efficient Training-Free Online Routing for High-Volume Multi-LLM Serving
By: Fangzhou Wu, Sandeep Silwal
Potential Business Impact:
Saves money by picking the best AI for each question.
Increasing demand for Large Language Model (LLM) services imposes substantial deployment and computation costs on providers. LLM routing offers a cost-efficient solution by directing each query to the optimal LLM based on model and query features. However, existing work primarily focuses on offline scenarios and struggles to adapt to online settings with high query volume and constrained token budgets. In this work, we introduce the first training-free algorithm for online routing scenarios. Our algorithm leverages approximate nearest neighbor search to efficiently estimate query features and performs a one-time optimization over a small set of initial queries to learn a routing strategy that guides future routing. We provide theoretical guarantees demonstrating that our algorithm achieves a competitive ratio of $1 - o(1)$ under natural assumptions, which is further validated by extensive experiments across 3 benchmark datasets and 8 baselines, showing an average improvement of 3.55$\times$ in overall performance, 1.85$\times$ in cost efficiency, and nearly 4.25$\times$ in throughput. Our code is available at https://github.com/fzwark/PORT.
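The core idea, per the abstract, is to estimate a new query's features from its nearest neighbors among a small set of initial queries and route under a token budget. A minimal sketch of that flow is below; all names, data, and the score-per-token selection rule are illustrative assumptions, not the paper's actual algorithm, and brute-force distance stands in for approximate nearest neighbor search.

```python
import math

# Hypothetical seed set from the one-time optimization phase (illustrative):
# each entry pairs a query embedding with per-model (score, token_cost) stats.
SEED = [
    ((0.9, 0.1), {"small": (0.60, 50), "large": (0.95, 400)}),
    ((0.1, 0.9), {"small": (0.85, 50), "large": (0.90, 400)}),
]

def nearest_seed(emb):
    # Brute-force stand-in for approximate nearest neighbor search.
    return min(SEED, key=lambda s: math.dist(s[0], emb))

def route(emb, budget_left):
    # Estimate the query's features from its nearest seed query, then pick
    # the budget-feasible model with the best score-per-token ratio.
    _, stats = nearest_seed(emb)
    feasible = {m: (sc, c) for m, (sc, c) in stats.items() if c <= budget_left}
    if not feasible:
        return None  # no model fits the remaining token budget
    return max(feasible, key=lambda m: feasible[m][0] / feasible[m][1])

print(route((0.8, 0.2), budget_left=1000))  # prints "small"
```

In a real deployment the seed statistics would come from profiling the initial queries against each model, and the linear scan would be replaced by an ANN index so routing stays fast at high query volume.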