RouterEval: A Comprehensive Benchmark for Routing LLMs to Explore Model-level Scaling Up in LLMs
By: Zhongzhan Huang, Guoming Ling, Yupei Lin, and more
Potential Business Impact:
Makes AI smarter by picking the best model for each task.
Routing large language models (LLMs) is a new paradigm that uses a router to recommend the best LLM from a pool of candidates for a given input. In this paper, our comprehensive analysis of more than 8,500 LLMs reveals a novel model-level scaling-up phenomenon in Routing LLMs: a capable router can significantly improve the performance of this paradigm as the number of candidates grows. The routed performance can even surpass that of the best single model in the pool and of many existing strong LLMs, confirming that routing is a highly promising paradigm. However, the lack of comprehensive, open-source benchmarks for Routing LLMs has hindered the development of routers. To address this, we introduce RouterEval, a benchmark tailored for router research. It contains more than 200,000,000 performance records for 12 popular LLM evaluations spanning areas such as commonsense reasoning and semantic understanding, collected from over 8,500 different LLMs. Extensive evaluations of existing Routing LLM methods on RouterEval show that most still have significant room for improvement. See https://github.com/MilkThink-Lab/RouterEval for all data, code, and tutorials.
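To make the routing paradigm concrete, here is a minimal sketch of a router choosing an LLM from a candidate pool for a given input. Everything in it (CandidateLLM, classify_domain, the toy score table) is an illustrative assumption, not RouterEval's API or the method evaluated in the paper; a real router would be a learned model trained on performance records like those RouterEval provides.

```python
# Minimal sketch of the Routing-LLMs paradigm: score candidates for an
# input, then route the input to the highest-scoring model.
# All names and numbers here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class CandidateLLM:
    name: str
    # Hypothetical per-domain strengths a router might have learned
    # from per-model performance records (domain -> expected accuracy).
    scores: dict

def classify_domain(prompt: str) -> str:
    """Crude stand-in for a learned input classifier."""
    if any(w in prompt.lower() for w in ("why", "because", "reason")):
        return "commonsense_reasoning"
    return "semantic_understanding"

def route(prompt: str, pool: list[CandidateLLM]) -> CandidateLLM:
    """Pick the candidate with the highest expected score for this input."""
    domain = classify_domain(prompt)
    return max(pool, key=lambda m: m.scores.get(domain, 0.0))

pool = [
    CandidateLLM("model-a", {"commonsense_reasoning": 0.81,
                             "semantic_understanding": 0.62}),
    CandidateLLM("model-b", {"commonsense_reasoning": 0.58,
                             "semantic_understanding": 0.77}),
]

print(route("Why do mirrors flip left and right?", pool).name)   # model-a
print(route("Summarize this paragraph in one line.", pool).name) # model-b
```

The model-level scaling-up claim is that, with a sufficiently capable router in place of the crude classifier above, adding more candidates to the pool keeps improving routed performance, eventually beating the pool's best single model.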
Similar Papers
VL-RouterBench: A Benchmark for Vision-Language Model Routing
Machine Learning (CS)
Helps AI choose the best way to answer questions.
How Robust Are Router-LLMs? Analysis of the Fragility of LLM Routing Capabilities
Computation and Language
Tests how reliably AI picks the best tool for each job.
Leveraging Uncertainty Estimation for Efficient LLM Routing
Networking and Internet Architecture
Makes AI give better answers for less money.