Routing by Analogy: kNN-Augmented Expert Assignment for Mixture-of-Experts

Published: January 5, 2026 | arXiv ID: 2601.02144v1

By: Boxuan Lyu, Soichiro Murakami, Hidetaka Kamigaito, and more

Potential Business Impact:

AI learns better by remembering past answers.

Business Areas:
A/B Testing, Data and Analytics

Mixture-of-Experts (MoE) architectures scale large language models efficiently by employing a parametric "router" to dispatch tokens to a sparse subset of experts. Typically, this router is trained once and then frozen, rendering routing decisions brittle under distribution shifts. We address this limitation by introducing kNN-MoE, a retrieval-augmented routing framework that reuses optimal expert assignments from a memory of similar past cases. This memory is constructed offline by directly optimizing token-wise routing logits to maximize the likelihood on a reference set. Crucially, we use the aggregate similarity of retrieved neighbors as a confidence-driven mixing coefficient, thus allowing the method to fall back to the frozen router when no relevant cases are found. Experiments show kNN-MoE outperforms zero-shot baselines and rivals computationally expensive supervised fine-tuning.
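To make the mechanism concrete, here is a minimal sketch of the routing blend the abstract describes: retrieve the nearest stored routing decisions, average their logits by similarity, and mix with the frozen router using a confidence coefficient derived from aggregate neighbor similarity. The paper's exact retrieval metric, datastore construction, and confidence function are not given in this summary; the cosine-similarity retrieval, mean-similarity mixing coefficient, and names such as `KNNMoERouter`, `k`, and `temperature` below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class KNNMoERouter:
    """Sketch of retrieval-augmented MoE routing (assumed design, not the paper's exact method).

    Holds a memory of (token representation -> precomputed routing logits) pairs.
    At inference, retrieved logits are blended with a frozen parametric router
    via a similarity-based confidence coefficient.
    """

    def __init__(self, frozen_router, keys, values, k=8, temperature=1.0):
        self.frozen_router = frozen_router      # e.g. nn.Linear(d_model, n_experts), frozen
        self.keys = F.normalize(keys, dim=-1)   # (N, d_model) memory of token representations
        self.values = values                    # (N, n_experts) offline-optimized routing logits
        self.k = k                              # number of neighbors to retrieve (assumption)
        self.temperature = temperature          # softmax temperature for neighbor weights

    def route(self, h):
        # h: (B, d_model) token hidden states
        base_logits = self.frozen_router(h)                        # (B, n_experts)

        # Retrieve the k nearest neighbors by cosine similarity.
        sims = F.normalize(h, dim=-1) @ self.keys.T                # (B, N)
        top_sims, idx = sims.topk(self.k, dim=-1)                  # (B, k)

        # Similarity-weighted average of the stored routing logits.
        weights = F.softmax(top_sims / self.temperature, dim=-1)   # (B, k)
        knn_logits = (weights.unsqueeze(-1) * self.values[idx]).sum(dim=1)

        # Confidence coefficient from aggregate neighbor similarity:
        # high similarity -> trust the memory; low -> fall back to the frozen router.
        lam = top_sims.mean(dim=-1, keepdim=True).clamp(0.0, 1.0)  # (B, 1)

        return lam * knn_logits + (1.0 - lam) * base_logits


# Usage with toy dimensions: 512-dim tokens, 16 experts, top-2 dispatch.
router = torch.nn.Linear(512, 16)
for p in router.parameters():
    p.requires_grad_(False)                     # router is trained once, then frozen

mem_keys = torch.randn(1000, 512)               # stand-in for reference-set token reps
mem_vals = torch.randn(1000, 16)                # stand-in for optimized routing logits
knn_router = KNNMoERouter(router, mem_keys, mem_vals, k=8)

logits = knn_router.route(torch.randn(4, 512))
experts = logits.topk(2, dim=-1).indices        # sparse top-2 expert assignment
```

The key design point the abstract highlights is the fallback behavior: when no relevant cases are retrieved, aggregate similarity (and hence the mixing coefficient) is low, so the blend degrades gracefully to the frozen router's decision.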

Page Count
12 pages

Category
Computer Science:
Computation and Language