Adaptive LLM Routing under Budget Constraints
By: Pranoy Panda, Raghav Magazine, Chaitanya Devaguptapu, and more
Potential Business Impact:
Chooses the best AI model for each question while staying within a cost budget, saving time and money.
Large Language Models (LLMs) have revolutionized natural language processing, but their varying capabilities and costs pose challenges in practical applications. LLM routing addresses this by dynamically selecting the most suitable LLM for each query/task. Previous approaches treat this as a supervised learning problem, assuming complete knowledge of optimal query-LLM pairings. However, real-world scenarios lack such comprehensive mappings and face evolving user queries. We thus propose to study LLM routing as a contextual bandit problem, enabling adaptive decision-making using bandit feedback without requiring exhaustive inference across all LLMs for all queries (in contrast to supervised routing). To address this problem, we develop a shared embedding space for queries and LLMs, where query and LLM embeddings are aligned to reflect their affinity. This space is initially learned from offline human preference data and refined through online bandit feedback. We instantiate this idea through Preference-prior Informed Linucb fOr adaptive rouTing (PILOT), a novel extension of LinUCB. To handle diverse user budgets for model routing, we introduce an online cost policy modeled as a multi-choice knapsack problem, ensuring resource-efficient routing.
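To make the bandit formulation concrete, here is a minimal sketch of how a LinUCB-style contextual bandit could route queries among candidate LLMs under a per-query cost budget. This is an illustrative assumption, not the authors' PILOT implementation: the class name LinUCBRouter, the alpha exploration parameter, the costs dictionary, and the greedy budget filter are all invented for this example, and the paper's preference-prior warm start from offline human preference data, the shared query-LLM embedding space, and the multi-choice knapsack cost policy are omitted or simplified.

```python
import numpy as np

class LinUCBRouter:
    """Minimal LinUCB-style contextual bandit over a set of candidate LLMs.

    Illustrative sketch only: PILOT additionally warm-starts from a preference
    prior learned on offline human preference data and uses a multi-choice
    knapsack policy for budgets, neither of which is implemented here.
    """

    def __init__(self, llm_names, query_dim, alpha=1.0, costs=None):
        self.llm_names = llm_names          # one bandit arm per candidate LLM
        self.d = query_dim                  # dimension of the query embedding
        self.alpha = alpha                  # exploration strength
        self.costs = costs or {m: 1.0 for m in llm_names}  # assumed per-call cost
        # Per-arm ridge-regression statistics: A = I + sum(x x^T), b = sum(r x)
        self.A = {m: np.eye(query_dim) for m in llm_names}
        self.b = {m: np.zeros(query_dim) for m in llm_names}

    def select(self, query_emb, budget=None):
        """Pick the affordable LLM with the highest upper confidence bound."""
        best_arm, best_ucb = None, -np.inf
        for m in self.llm_names:
            if budget is not None and self.costs[m] > budget:
                continue                     # skip models that exceed the budget
            A_inv = np.linalg.inv(self.A[m])
            theta = A_inv @ self.b[m]        # current reward estimate for this LLM
            mean = theta @ query_emb
            bonus = self.alpha * np.sqrt(query_emb @ A_inv @ query_emb)
            if mean + bonus > best_ucb:
                best_arm, best_ucb = m, mean + bonus
        return best_arm

    def update(self, arm, query_emb, reward):
        """Bandit feedback: reward is observed only for the LLM actually queried."""
        self.A[arm] += np.outer(query_emb, query_emb)
        self.b[arm] += reward * query_emb


# Hypothetical usage: model names, costs, and rewards are placeholders.
router = LinUCBRouter(["small-llm", "large-llm"], query_dim=768,
                      costs={"small-llm": 0.1, "large-llm": 1.0})
q = np.random.randn(768)                 # stand-in for a query embedding
choice = router.select(q, budget=0.5)
router.update(choice, q, reward=1.0)     # e.g., 1.0 if the answer was judged helpful
```

Note that the budget handling above is a greedy per-query filter purely for illustration; the paper instead frames the cost policy as an online multi-choice knapsack problem over the stream of queries.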
Similar Papers
PersonalizedRouter: Personalized LLM Routing via Graph-based User Preference Modeling
Machine Learning (CS)
Chooses the best AI for your specific needs.
Neural Bandit Based Optimal LLM Selection for a Pipeline of Tasks
Computation and Language
Chooses best AI for each step of a task.