
Dense Backpropagation Improves Training for Sparse Mixture-of-Experts

Published: April 16, 2025 | arXiv ID: 2504.12463v2

By: Ashwinee Panda, Vatsal Baherwani, Zain Sarwar, and more

Potential Business Impact:

Makes large AI models train more stably and perform better, without significant extra computing cost.

Business Areas:
A/B Testing, Data and Analytics

Mixture of Experts (MoE) pretraining is more scalable than dense Transformer pretraining, because MoEs learn to route inputs to a sparse set of their feedforward parameters. However, this means that MoEs only receive a sparse backward update, leading to training instability and suboptimal performance. We present a lightweight approximation method that gives the MoE router a dense gradient update while continuing to sparsely activate its parameters. Our method, which we refer to as Default MoE, substitutes missing expert activations with default outputs consisting of an exponential moving average of expert outputs previously seen over the course of training. This allows the router to receive signals from every expert for each token, leading to significant improvements in training performance. Our Default MoE outperforms standard TopK routing in a variety of settings without requiring significant computational overhead. Code: https://github.com/vatsal0/default-moe.
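
The abstract describes the mechanism in enough detail to sketch it. Below is a minimal PyTorch sketch of the idea, not the authors' implementation (that is in the linked repository): each expert keeps an exponential-moving-average "default" output in a buffer, non-selected experts contribute their default weighted by the router probability, and only the top-k experts are actually evaluated, so the router receives a gradient signal from every expert. The module name `DefaultMoESketch`, the exact EMA update rule (a decayed mean over the tokens each expert processed in the batch), and the use of unnormalized softmax probabilities over all experts (rather than renormalized top-k weights) are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DefaultMoESketch(nn.Module):
    """TopK MoE layer where non-selected experts contribute a per-expert EMA
    'default' output, so the router gets a gradient from every expert while
    only the top-k experts are actually computed. A sketch, not the paper's code."""

    def __init__(self, d_model: int, n_experts: int, k: int = 2, ema_decay: float = 0.99):
        super().__init__()
        self.k, self.ema_decay = k, ema_decay
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        # One EMA "default" output vector per expert; a buffer, not a parameter.
        self.register_buffer("default_out", torch.zeros(n_experts, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)           # (tokens, experts)
        topk_idx = probs.topk(self.k, dim=-1).indices       # sparse activation
        sel = torch.zeros_like(probs, dtype=torch.bool)
        sel.scatter_(1, topk_idx, True)                      # selected (token, expert) pairs

        # Non-selected experts contribute their EMA default output, weighted by
        # the router probability. The defaults are constants (buffers), so this
        # term only sends gradients to the router -- the "dense" router update.
        out = probs.masked_fill(sel, 0.0) @ self.default_out  # (tokens, d_model)

        for e, expert in enumerate(self.experts):
            token_idx = sel[:, e].nonzero(as_tuple=True)[0]
            if token_idx.numel() == 0:
                continue
            ye = expert(x[token_idx])                        # real expert output
            pe = probs[token_idx, e].unsqueeze(-1)
            out = out.index_add(0, token_idx, pe * ye)       # add real contributions

            # EMA update of this expert's default output (assumption: a decayed
            # mean over the tokens it processed in this batch, detached).
            with torch.no_grad():
                self.default_out[e].lerp_(ye.detach().mean(dim=0), 1 - self.ema_decay)
        return out
```

A layer such as `DefaultMoESketch(d_model=512, n_experts=8, k=2)` can then stand in for a feedforward block; the only extra work beyond standard TopK routing is a matrix multiply of router probabilities against the default-output table, which is consistent with the abstract's claim that the method adds little computational overhead.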

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)