Dense Backpropagation Improves Training for Sparse Mixture-of-Experts
By: Ashwinee Panda, Vatsal Baherwani, Zain Sarwar, and more
Potential Business Impact:
Makes AI learn better, faster, and more stably.
Mixture of Experts (MoE) pretraining is more scalable than dense Transformer pretraining because MoEs learn to route each input to a sparse subset of their feedforward parameters. However, this sparsity means the router only receives a backward update from the few experts activated for each token, leading to training instability and suboptimal performance. We present a lightweight approximation method that gives the MoE router a dense gradient update while continuing to sparsely activate its parameters. Our method, which we refer to as Default MoE, substitutes missing expert activations with default outputs: an exponential moving average (EMA) of each expert's outputs seen earlier in training. This allows the router to receive a signal from every expert for each token, leading to significant improvements in training performance. Default MoE outperforms standard TopK routing in a variety of settings without requiring significant computational overhead. Code: https://github.com/vatsal0/default-moe.
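The abstract describes the mechanism concretely enough to sketch it in code. Below is a minimal PyTorch sketch, not the released implementation linked above, of how a Default-MoE-style layer could work: only the TopK experts are computed per token, while every non-activated expert contributes a "default" output drawn from an EMA buffer of its past outputs, so the router receives a gradient from all experts. The class name DefaultMoESketch, the expert MLP shape, the ema_decay value, and the exact weighting of TopK versus default contributions are assumptions; see the linked repository for the authors' code.

```python
# Sketch only: assumes names/hyperparameters not given in the abstract
# (DefaultMoESketch, ema_decay=0.99, 4x-expansion expert MLPs).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DefaultMoESketch(nn.Module):
    """TopK MoE layer whose router gets a dense gradient via EMA 'default' outputs."""

    def __init__(self, d_model: int, n_experts: int, top_k: int = 2, ema_decay: float = 0.99):
        super().__init__()
        self.top_k = top_k
        self.ema_decay = ema_decay
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        # Running EMA of each expert's mean output; stands in for experts
        # that are not activated for a given token (the "default" outputs).
        self.register_buffer("default_out", torch.zeros(n_experts, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)         # dense routing probabilities
        topk_p, topk_i = probs.topk(self.top_k, dim=-1)   # sparse expert selection

        # Dense path: every expert contributes its (constant) default output,
        # weighted by its routing probability, so the router receives a
        # gradient signal from all experts for every token.
        defaults = self.default_out.detach().clone()
        out = probs @ defaults

        # Sparse path: for the TopK experts that are actually computed, swap
        # the default contribution for the true expert output.
        correction = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            tok_idx, slot = (topk_i == e).nonzero(as_tuple=True)
            if tok_idx.numel() == 0:
                continue
            y = expert(x[tok_idx])                          # real expert activation
            w = topk_p[tok_idx, slot].unsqueeze(-1)         # routing weight for expert e
            correction = correction.index_add(0, tok_idx, w * (y - defaults[e]))
            with torch.no_grad():                           # update the running default
                self.default_out[e].mul_(self.ema_decay).add_(
                    (1.0 - self.ema_decay) * y.mean(dim=0))
        return out + correction
```

In this sketch the EMA buffer is detached, so the dense path only feeds gradients to the router, not to the inactive experts, matching the stated goal of a dense router update with sparse expert activation.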
Similar Papers
Mixture of Group Experts for Learning Invariant Representations
Machine Learning (CS)
Makes AI smarter by teaching experts to work together.
Load Balancing Mixture of Experts with Similarity Preserving Routers
Machine Learning (CS)
Makes AI learn faster and smarter.
DualSparse-MoE: Coordinating Tensor/Neuron-Level Sparsity with Expert Partition and Reconstruction
Machine Learning (CS)
Makes smart computer programs run faster and better.