HAPS: Hierarchical LLM Routing with Joint Architecture and Parameter Search
By: Zihang Tian, Rui Li, Jingsen Zhang, and more
Potential Business Impact:
Picks the best AI model and its settings for each job.
Large language model (LLM) routing aims to exploit the specialized strengths of different LLMs across diverse tasks. However, existing approaches typically select only among LLM architectures and overlook parameter settings, which are critical for task performance. In this paper, we introduce HAPS, a hierarchical LLM routing framework that jointly searches over model architectures and parameters. Specifically, a high-level router selects among candidate LLM architectures, and a low-level router then searches for the optimal parameters of the selected architecture. We design a parameter generation network that shares parameters between the two routers so that each enhances the other. For training, we design a reward-augmented objective that effectively optimizes the framework. Experiments on two commonly used benchmarks show that HAPS consistently outperforms strong routing baselines. We have released our code at https://github.com/zihangtian/HAPS.
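To make the two-level design concrete, below is a minimal sketch of how such a hierarchical router could be wired up, assuming query embeddings as input and a small grid of decoding configurations as the low-level search space. All class names, dimensions, and the embedding-based parameter generation network are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a hierarchical two-level router. Everything here
# (module names, sizes, the candidate-config grid) is a hypothetical
# illustration of the abstract's design, not the HAPS codebase.
import torch
import torch.nn as nn

class HighLevelRouter(nn.Module):
    """Scores candidate LLM architectures for a query embedding."""
    def __init__(self, embed_dim: int, num_models: int):
        super().__init__()
        self.scorer = nn.Linear(embed_dim, num_models)

    def forward(self, query_emb: torch.Tensor) -> torch.Tensor:
        # Distribution over candidate architectures.
        return torch.softmax(self.scorer(query_emb), dim=-1)

class LowLevelRouter(nn.Module):
    """Scores candidate parameter settings (e.g., decoding configs)
    for the architecture chosen by the high-level router."""
    def __init__(self, embed_dim: int, num_models: int, num_configs: int):
        super().__init__()
        # Hypothetical parameter generation network: it produces the
        # low-level scorer's weights conditioned on the chosen model,
        # so information is shared across the two routing levels.
        self.param_gen = nn.Embedding(num_models, embed_dim * num_configs)
        self.embed_dim = embed_dim
        self.num_configs = num_configs

    def forward(self, query_emb: torch.Tensor, model_id: torch.Tensor):
        w = self.param_gen(model_id).view(self.num_configs, self.embed_dim)
        # Distribution over parameter settings for this architecture.
        return torch.softmax(query_emb @ w.t(), dim=-1)

# Usage: route a query embedding to an (architecture, parameter) pair.
router_hi = HighLevelRouter(embed_dim=64, num_models=3)
router_lo = LowLevelRouter(embed_dim=64, num_models=3, num_configs=4)
q = torch.randn(64)
model_id = router_hi(q).argmax()
config_id = router_lo(q, model_id).argmax()
print(f"route to model {model_id.item()}, config {config_id.item()}")

# A sketch of a reward-augmented objective in the REINFORCE style:
# weight the log-likelihood of the chosen pair by an observed task
# reward (e.g., answer correctness). The exact objective in the paper
# may differ; this only shows the general shape.
reward = torch.tensor(1.0)  # assumed downstream reward signal
loss = -reward * (torch.log(router_hi(q)[model_id])
                  + torch.log(router_lo(q, model_id)[config_id]))
```

Conditioning the low-level scorer's weights on the high-level choice (rather than training two independent routers) is one plausible reading of the abstract's parameter-sharing scheme; the released code at the link above is the authoritative reference.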
Similar Papers
HierRouter: Coordinated Routing of Specialized Large Language Models via Reinforcement Learning
Computation and Language
Makes smart computer programs run faster and cheaper.
A Roadmap to Guide the Integration of LLMs in Hierarchical Planning
Artificial Intelligence
Helps computers plan tasks better using smart language.
Towards a General Framework for HTN Modeling with LLMs
Software Engineering
Helps AI plan tasks by breaking them down.