$\mu$-Parametrization for Mixture of Experts
By: Jan Małaśnicki, Kamil Ciebiera, Mateusz Boruń, and more
Potential Business Impact:
Makes big computer brains learn better and faster.
Recent years have seen growing interest in and adoption of LLMs, with $\mu$Transfer becoming a key technique for tuning hyperparameters in large-scale training. Meanwhile, Mixture-of-Experts (MoE) has emerged as a leading architecture for extremely large models. However, the intersection of these two advances has remained unexplored. In this work, we derive a $\mu$-Parameterization ($\mu$P) for MoE, providing theoretical guarantees for feature learning across model widths in both the router and the experts. We empirically validate our parameterization and further investigate how scaling the number of experts and the granularity affects the optimal learning rate.
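To make the scaling rules concrete, below is a minimal PyTorch sketch, under stated assumptions, of how $\mu$P-style scaling could be wired into a toy MoE layer: weight matrices initialized with standard deviation proportional to $1/\sqrt{\text{fan-in}}$, and Adam learning rates for matrix-like parameters shrunk in proportion to $1/\text{width}$ relative to a tuned base width. The ToyMoE class, the mup_param_groups helper, and in particular the treatment of the router are illustrative assumptions for this sketch, not the parameterization derived in the paper.

```python
import math
import torch
import torch.nn as nn


class ToyMoE(nn.Module):
    """Dense (all-experts) toy MoE block used only to illustrate muP-style scaling."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.w_in = nn.Parameter(torch.empty(n_experts, d_model, d_ff))
        self.w_out = nn.Parameter(torch.empty(n_experts, d_ff, d_model))
        # muP-style "hidden weight" init: std proportional to 1/sqrt(fan_in).
        nn.init.normal_(self.router.weight, std=1.0 / math.sqrt(d_model))
        nn.init.normal_(self.w_in, std=1.0 / math.sqrt(d_model))
        nn.init.normal_(self.w_out, std=1.0 / math.sqrt(d_ff))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); gates: (tokens, n_experts)
        gates = torch.softmax(self.router(x), dim=-1)
        h = torch.relu(torch.einsum("td,edf->tef", x, self.w_in))
        y = torch.einsum("tef,efd->ted", h, self.w_out)  # per-expert outputs
        return torch.einsum("te,ted->td", gates, y)      # gate-weighted mixture


def mup_param_groups(model: ToyMoE, base_lr: float, width: int, base_width: int):
    """Adam LR for matrix-like weights shrinks as 1/width under muP, so a rate
    tuned at base_width is reused at larger widths. Treating the router the
    same way as the expert matrices is an assumption of this sketch."""
    scale = base_width / width
    return [
        {"params": [model.w_in, model.w_out], "lr": base_lr * scale},
        {"params": list(model.router.parameters()), "lr": base_lr * scale},
    ]


model = ToyMoE(d_model=256, d_ff=512, n_experts=8)
opt = torch.optim.Adam(mup_param_groups(model, base_lr=1e-3, width=256, base_width=128))
```

The point of such a setup is that a learning rate tuned at base_width should remain near-optimal as d_model grows; that transfer property, and how it interacts with the number of experts and the granularity, is what the paper investigates for MoE.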
Similar Papers
Breaking the MoE LLM Trilemma: Dynamic Expert Clustering with Structured Compression
Computation and Language
Makes AI smarter and faster while using less memory.
Bayesian Mixture of Experts For Large Language Models
Machine Learning (CS)
Helps AI know when it's unsure about answers.
ReXMoE: Reusing Experts with Minimal Overhead in Mixture-of-Experts
Computation and Language
Makes smart computer programs learn better and faster.