R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Token Routing
By: Tianyu Fu, Yi Ge, Yichen You, and more
Potential Business Impact:
Makes smart AI faster and cheaper to use.
Large Language Models (LLMs) achieve impressive reasoning capabilities at the cost of substantial inference overhead, which poses serious deployment challenges. Although distilled Small Language Models (SLMs) greatly improve efficiency, their performance suffers because they fail to follow the LLMs' reasoning paths. Fortunately, we reveal that only a small fraction of tokens genuinely diverge the reasoning paths between LLMs and SLMs. Most generated tokens are either identical or exhibit neutral differences, such as minor variations in abbreviations or expressions. Leveraging this insight, we introduce **Roads to Rome (R2R)**, a neural token routing method that selectively invokes the LLM only for these critical, path-divergent tokens, while leaving the majority of token generation to the SLM. We also develop an automatic data generation pipeline that identifies divergent tokens and produces token-level routing labels to train the lightweight router. We apply R2R to combine the R1-1.5B and R1-32B models from the DeepSeek family and evaluate it on challenging math, coding, and QA benchmarks. With an average activated parameter size of 5.6B, R2R surpasses the average accuracy of R1-7B by 1.6x, outperforming even the R1-14B model. Compared to R1-32B, it delivers a 2.8x wall-clock speedup with comparable performance, advancing the Pareto frontier of test-time scaling efficiency. Our code is available at https://github.com/thu-nics/R2R.
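To make the routing idea concrete, here is a minimal sketch of small-large token routing in PyTorch. It is not the authors' released implementation (see the GitHub link above for that): the `Router` classifier, the 0.5 threshold, the greedy decoding loop, and the naive full-prefix forward pass to the large model are all illustrative assumptions.

```python
# Minimal sketch of R2R-style token routing (illustrative, not the official implementation).
# Assumptions: both models share a tokenizer/vocabulary, `Router` is a hypothetical
# lightweight classifier over SLM hidden states, and greedy decoding is used.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
slm = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
llm = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")


class Router(torch.nn.Module):
    """Hypothetical lightweight router: flags tokens likely to diverge the reasoning path."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(hidden_size, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 1),
        )

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        # Probability that the SLM's proposed token is path-divergent.
        return torch.sigmoid(self.mlp(hidden_state))


@torch.no_grad()
def generate(prompt: str, router: Router, threshold: float = 0.5, max_new_tokens: int = 256) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        out = slm(ids, output_hidden_states=True)
        next_token = out.logits[:, -1].argmax(dim=-1, keepdim=True)
        # Route to the LLM only when the router predicts a path-divergent token;
        # re-forwarding the whole prefix is inefficient but keeps the sketch simple.
        if router(out.hidden_states[-1][:, -1]).item() > threshold:
            next_token = llm(ids).logits[:, -1].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_token], dim=-1)
        if next_token.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

In this sketch the SLM drives generation and the LLM is consulted only for tokens the router flags, which is what keeps the average activated parameter count close to the SLM's size; a practical implementation would also reuse KV caches for both models rather than re-running the full prefix on each routed token.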
Similar Papers
Route-and-Reason: Scaling Large Language Model Reasoning with Reinforced Model Router
Computation and Language
Smarter AI uses small AI for easy tasks.
SplitReason: Learning To Offload Reasoning
Computation and Language
Smart AI asks bigger AI for hard math help.
Self-Route: Automatic Mode Switching via Capability Estimation for Efficient Reasoning
Computation and Language
Saves computer power by thinking less.