CONCUR: A Framework for Continual Constrained and Unconstrained Routing
By: Peter Baile Chen, Weiyue Li, Dan Roth, and more
Potential Business Impact:
Helps AI choose the best way to solve problems.
AI tasks differ in complexity and are best addressed with different computation strategies (e.g., combinations of models and decoding methods). An effective routing system that maps tasks to the appropriate strategies is therefore crucial. Most prior methods train a single routing model across all strategies, which demands full retraining whenever new strategies appear and incurs high overhead. Existing attempts at continual routing, which add new strategies without retraining from scratch, often struggle to generalize. Prior models also typically rely on a single input representation, limiting their ability to capture the full complexity of the routing problem and leading to sub-optimal routing decisions. To address these gaps, we propose CONCUR, a continual routing framework that supports both constrained and unconstrained routing (i.e., routing with or without a budget). Our modular design trains a separate predictor model for each strategy, enabling seamless incorporation of new strategies at low additional training cost. Our predictors also leverage multiple representations of both tasks and computation strategies to better capture overall problem complexity. Experiments on both in-distribution and out-of-distribution, knowledge- and reasoning-intensive tasks show that our method outperforms the best single strategy and strong existing routing techniques, achieving higher end-to-end accuracy and lower inference cost in both continual and non-continual settings, while also reducing training cost in the continual setting.
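The sketch below illustrates the modular idea described in the abstract: one predictor per computation strategy, with routing that is either unconstrained (pick the strategy with the highest predicted accuracy) or budget-constrained (pick the best strategy whose predicted cost fits the budget). This is a minimal, hypothetical illustration; the predictor functions, feature dictionaries, strategy names, and scoring rule here are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of per-strategy routing with an optional budget (hypothetical, not the paper's code).
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class StrategyPredictor:
    """One predictor per computation strategy (e.g., a model + decoding method)."""
    name: str
    predict_accuracy: Callable[[dict], float]  # estimated success probability for a task
    predict_cost: Callable[[dict], float]      # estimated inference cost for a task

def route(task_features: dict,
          predictors: list[StrategyPredictor],
          budget: Optional[float] = None) -> str:
    """Unconstrained: highest predicted accuracy.
    Constrained: highest predicted accuracy among strategies whose cost fits the budget."""
    candidates = predictors
    if budget is not None:
        within = [p for p in predictors if p.predict_cost(task_features) <= budget]
        candidates = within or predictors  # fall back if no strategy fits the budget
    return max(candidates, key=lambda p: p.predict_accuracy(task_features)).name

# Adding a new strategy only requires training and registering one more predictor;
# existing predictors are untouched, which is the continual-routing benefit.
if __name__ == "__main__":
    predictors = [
        StrategyPredictor("small-model-greedy",
                          predict_accuracy=lambda f: 0.6 + 0.2 * f.get("is_simple", 0),
                          predict_cost=lambda f: 1.0),
        StrategyPredictor("large-model-self-consistency",
                          predict_accuracy=lambda f: 0.85,
                          predict_cost=lambda f: 8.0),
    ]
    task = {"is_simple": 1}
    print(route(task, predictors))               # unconstrained routing
    print(route(task, predictors, budget=2.0))   # budget-constrained routing
```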
Similar Papers
Constraint-Aware Route Recommendation from Natural Language via Hierarchical LLM Agents
Artificial Intelligence
Finds the best routes from your plain-language requests.
Cost-Aware Contrastive Routing for LLMs
Machine Learning (CS)
Finds the cheapest AI model for your questions.
RCR-Router: Efficient Role-Aware Context Routing for Multi-Agent LLM Systems with Structured Memory
Computation and Language
Makes AI agent teams work smarter while using fewer words.