Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks
By: Taishi Nakamura, Satoki Ishikawa, Masaki Kawamura, et al.
Potential Business Impact:
Makes AI better at thinking, not just remembering.
Empirical scaling laws have driven the evolution of large language models (LLMs), yet their coefficients shift whenever the model architecture or data pipeline changes. Mixture-of-Experts (MoE) models, now standard in state-of-the-art systems, introduce a new sparsity dimension that current dense-model frontiers overlook. We investigate how MoE sparsity influences two distinct capability regimes: memorization and reasoning. We train families of MoE Transformers that systematically vary total parameters, active parameters, and top-$k$ routing while holding the compute budget fixed. For every model we record pre-training loss, downstream task loss, and task accuracy, allowing us to separate the train-test generalization gap from the loss-accuracy gap. Memorization benchmarks improve monotonically with total parameters, mirroring training loss. By contrast, reasoning performance saturates and can even regress despite continued gains in both total parameters and training loss. Altering top-$k$ alone has little effect when active parameters are constant, and classic hyperparameters such as learning rate and initialization modulate the generalization gap in the same direction as sparsity. Neither post-training reinforcement learning (GRPO) nor extra test-time compute rescues the reasoning deficit of overly sparse models. Our model checkpoints, code, and logs are open source at https://github.com/rioyokotalab/optimal-sparsity.
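The sparsity axis the abstract varies is top-$k$ expert routing: each token is dispatched to only $k$ of a layer's experts, so total parameters can grow with the expert count while active parameters (compute per token) stay fixed. The sketch below is a minimal, illustrative top-$k$ MoE layer in PyTorch, assuming a softmax gate over the selected router logits and two-layer GELU expert MLPs; the class name, dimensions, and expert shape are our assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Illustrative top-k routed Mixture-of-Experts feed-forward layer.

    Total parameters grow with n_experts; active parameters per token
    grow only with top_k -- the sparsity dimension the paper studies.
    (Hypothetical sketch, not the authors' implementation.)
    """

    def __init__(self, d_model: int, d_ff: int, n_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model)
        logits = self.router(x)                      # (n_tokens, n_experts)
        gate, idx = logits.topk(self.top_k, dim=-1)  # choose k experts per token
        gate = F.softmax(gate, dim=-1)               # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += gate[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example: 64 experts with top-2 routing gives large total capacity
# at the per-token compute of roughly two dense FFNs.
layer = TopKMoE(d_model=512, d_ff=2048, n_experts=64, top_k=2)
tokens = torch.randn(10, 512)
print(layer(tokens).shape)  # torch.Size([10, 512])
```

Under this parameterization, per-token FFN compute scales with `top_k` while parameter count scales with `n_experts`, which is why the study can hold the training-compute budget fixed while sweeping total parameters, and why changing `top_k` alone (with active parameters held constant) is a separate experimental axis from sparsity itself.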
Similar Papers
DualSparse-MoE: Coordinating Tensor/Neuron-Level Sparsity with Expert Partition and Reconstruction
Machine Learning (CS)
Makes smart computer programs run faster and better.
Mixture of Group Experts for Learning Invariant Representations
Machine Learning (CS)
Makes AI smarter by teaching experts to work together.
Faster MoE LLM Inference for Extremely Large Models
Computation and Language
Makes AI faster by using fewer parts.