Routing by Analogy: kNN-Augmented Expert Assignment for Mixture-of-Experts
By: Boxuan Lyu, Soichiro Murakami, Hidetaka Kamigaito, and more
Potential Business Impact:
AI learns better by remembering past answers.
Mixture-of-Experts (MoE) architectures scale large language models efficiently by employing a parametric "router" to dispatch tokens to a sparse subset of experts. Typically, this router is trained once and then frozen, rendering routing decisions brittle under distribution shifts. We address this limitation by introducing kNN-MoE, a retrieval-augmented routing framework that reuses optimal expert assignments from a memory of similar past cases. This memory is constructed offline by directly optimizing token-wise routing logits to maximize the likelihood on a reference set. Crucially, we use the aggregate similarity of retrieved neighbors as a confidence-driven mixing coefficient, thus allowing the method to fall back to the frozen router when no relevant cases are found. Experiments show kNN-MoE outperforms zero-shot baselines and rivals computationally expensive supervised fine-tuning.
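To make the routing mechanism concrete, here is a minimal Python sketch of confidence-mixed kNN routing. This is an illustration under stated assumptions, not the paper's exact formulation: the function name `knn_moe_logits`, the use of cosine similarity for retrieval, the softmax weighting of neighbors, and the specific form of the mixing coefficient (the mean neighbor similarity) are all stand-ins for details the abstract does not specify.

```python
import numpy as np

def knn_moe_logits(h, memory_keys, memory_logits, router_w, k=8, temperature=1.0):
    """Hypothetical sketch of kNN-augmented MoE routing.

    h             : (d,)   hidden state of the current token
    memory_keys   : (N, d) keys of past tokens in the offline datastore
    memory_logits : (N, E) optimized routing logits stored for each key
    router_w      : (d, E) weights of the frozen parametric router
    """
    # Logits from the frozen router (the fallback path).
    base_logits = h @ router_w

    # Retrieve the k nearest neighbors by cosine similarity.
    keys = memory_keys / np.linalg.norm(memory_keys, axis=1, keepdims=True)
    q = h / np.linalg.norm(h)
    sims = keys @ q                       # (N,) similarity to every stored key
    idx = np.argsort(sims)[-k:]           # indices of the top-k neighbors
    top_sims = sims[idx]

    # Similarity-weighted average of the neighbors' stored routing logits.
    weights = np.exp(top_sims / temperature)
    weights /= weights.sum()
    knn_logits = weights @ memory_logits[idx]   # (E,)

    # Confidence-driven mixing: high aggregate similarity trusts the memory;
    # low similarity shrinks lam, falling back to the frozen router.
    lam = float(np.clip(top_sims.mean(), 0.0, 1.0))
    return lam * knn_logits + (1.0 - lam) * base_logits

# Example: 1000 memorized tokens, 16-dim hidden states, 8 experts.
rng = np.random.default_rng(0)
logits = knn_moe_logits(
    h=rng.normal(size=16),
    memory_keys=rng.normal(size=(1000, 16)),
    memory_logits=rng.normal(size=(1000, 8)),
    router_w=rng.normal(size=(16, 8)),
)
top2_experts = np.argsort(logits)[-2:]   # sparse top-2 expert selection
```

The key property this sketch preserves is the fallback behavior described in the abstract: when the retrieved neighbors are dissimilar to the query token, the mixing coefficient shrinks and the frozen router's logits dominate the final routing decision.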
Similar Papers
Tight Clusters Make Specialized Experts
Machine Learning (CS)
Makes AI learn faster and better.
Multilingual Routing in Mixture-of-Experts
Computation and Language
Makes AI understand many languages better.
Probing Semantic Routing in Large Mixture-of-Expert Models
Machine Learning (CS)
AI learns meaning to pick the right thinking part.