Towards Principled Design of Mixture-of-Experts Language Models under Memory and Inference Constraints

Published: January 13, 2026 | arXiv ID: 2601.08215v1

By: Seng Pei Liew, Kenta Shinzato, Yuyang Dong

Potential Business Impact:

Offers practical guidance for designing Mixture-of-Experts language models that extract more performance from fixed memory and inference budgets.

Business Areas:
A/B Testing, Data and Analytics

Modern Mixture-of-Experts (MoE) language models are designed around total parameters (memory footprint) and active parameters (inference cost). However, we find that these two factors alone are insufficient to describe an optimal architecture. Through a systematic study, we demonstrate that MoE performance is primarily determined by total parameters ($N_{total}$) and expert sparsity ($s := n_{exp}/n_{topk}$, where $n_{exp}$ is the number of experts and $n_{topk}$ is the number of experts activated per token). Moreover, $n_{exp}$ and $n_{topk}$ do not "cancel out" within the sparsity ratio; instead, a larger total number of experts slightly penalizes performance by forcing a reduction in core model dimensions (depth and width) to meet memory constraints. This motivates a simple design principle: maximize $N_{total}$ while minimizing $s$ (i.e., maximizing $n_{topk}$) and minimizing $n_{exp}$ under the given constraints. Our findings provide a robust framework for resolving architectural ambiguity and guiding MoE design.
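
The design principle can be read as a constrained selection rule. Below is a minimal Python sketch of that rule, assuming crude parameter-count formulas (attention plus expert FFNs only, ignoring embeddings and routers); `MoEConfig`, `total_params`, `active_params`, `pick_config`, the `ffn_mult` factor, and all numeric budgets are illustrative assumptions, not quantities from the paper.

```python
from dataclasses import dataclass


@dataclass
class MoEConfig:
    d_model: int   # hidden width
    n_layers: int  # depth
    n_exp: int     # total experts per MoE layer
    n_topk: int    # experts activated per token

    @property
    def sparsity(self) -> float:
        # Expert sparsity s := n_exp / n_topk, as defined in the abstract.
        return self.n_exp / self.n_topk


def total_params(cfg: MoEConfig, ffn_mult: int = 4) -> float:
    """Rough total-parameter count (memory footprint); illustrative only."""
    attn = cfg.n_layers * 4 * cfg.d_model ** 2
    experts = cfg.n_layers * cfg.n_exp * 2 * cfg.d_model * (ffn_mult * cfg.d_model)
    return attn + experts


def active_params(cfg: MoEConfig, ffn_mult: int = 4) -> float:
    """Rough per-token active-parameter count (inference cost): attention plus n_topk experts."""
    attn = cfg.n_layers * 4 * cfg.d_model ** 2
    experts = cfg.n_layers * cfg.n_topk * 2 * cfg.d_model * (ffn_mult * cfg.d_model)
    return attn + experts


def pick_config(candidates, mem_budget, active_budget):
    """Apply the stated principle: among configurations that fit both budgets,
    prefer the largest N_total, break ties by lower sparsity s, then by fewer experts."""
    feasible = [c for c in candidates
                if total_params(c) <= mem_budget and active_params(c) <= active_budget]
    return max(feasible,
               key=lambda c: (total_params(c), -c.sparsity, -c.n_exp),
               default=None)


if __name__ == "__main__":
    # Hypothetical candidate architectures and budgets, for illustration.
    candidates = [
        MoEConfig(d_model=2048, n_layers=24, n_exp=64, n_topk=8),
        MoEConfig(d_model=2304, n_layers=26, n_exp=16, n_topk=4),
        MoEConfig(d_model=2560, n_layers=28, n_exp=8,  n_topk=2),
    ]
    best = pick_config(candidates, mem_budget=2e10, active_budget=5e9)
    print(best, None if best is None else best.sparsity)
```

Under these assumed budgets, the first candidate is dropped for exceeding the memory budget, and the rule then favors the remaining configuration with the larger total parameter count, illustrating how the principle resolves ambiguity between otherwise plausible architectures.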

Page Count
10 pages

Category
Computer Science:
Computation and Language