Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations
By: Ajay Jaiswal, Jianyu Wang, Yixiao Li, and more
Potential Business Impact:
Makes smart computer programs smaller and faster.
Sparsely activated Mixture-of-Experts (SMoE) architectures have shown promise in scaling up the learning capacity of neural networks. However, vanilla SMoEs suffer from expert redundancy and heavy memory requirements, making them inefficient and hard to scale, especially in resource-constrained scenarios. Expert-level sparsification of SMoEs addresses these limitations by pruning the least important experts. In this work, we aim to answer three questions: (1) What is the best recipe for identifying the least knowledgeable subset of experts that can be dropped with minimal impact on performance? (2) How should expert dropping be performed (one-shot or iterative), and what correction measures can minimize its impact on the capabilities of the resulting SMoE subnetwork? (3) Which capabilities of the full SMoE are most severely affected by removing the least dominant experts, and how can they be recovered? First, we propose the MoE Experts Compression Suite (MC-Suite), a collection of previously explored and novel recipes that provides a comprehensive benchmark for estimating expert importance from diverse perspectives and yields numerous valuable insights about SMoE experts. Second, unlike prior work that prunes experts in one shot, we explore the benefits of iterative pruning with re-estimation of the MC-Suite criterion after each round. Moreover, we demonstrate the benefits of task-agnostic fine-tuning as a correction mechanism during iterative expert dropping, an approach we term MoE Lottery Subnetworks. Finally, we present an experimentally validated conjecture that expert dropping predominantly hurts SMoEs' instruction-following capabilities, which can be restored to a robust level by externally augmenting those capabilities with k-shot examples and supervised fine-tuning.
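The abstract describes expert-level sparsification as ranking experts by an importance criterion and dropping the least important ones, optionally over several rounds with the criterion re-estimated between rounds. The sketch below illustrates that loop on a toy top-1 MoE layer. It is not the paper's MC-Suite: the ToyMoELayer, the mean-router-probability criterion, and the drop schedule are illustrative assumptions, and the correction step (task-agnostic fine-tuning) is only indicated by a comment.

```python
# Minimal sketch of iterative expert dropping with re-estimated importance.
# Assumptions: a toy top-1 MoE layer and a simple importance criterion
# (mean router probability on calibration data); not the paper's MC-Suite.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMoELayer(nn.Module):
    """A toy top-1 Mixture-of-Experts layer with prunable experts."""

    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        # Mask of experts that are still active (True = kept).
        self.register_buffer("active", torch.ones(n_experts, dtype=torch.bool))

    def forward(self, x: torch.Tensor):
        logits = self.router(x)
        # Dropped experts are excluded from routing.
        logits = logits.masked_fill(~self.active, float("-inf"))
        probs = F.softmax(logits, dim=-1)
        top1 = probs.argmax(dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top1 == e
            if sel.any():
                out[sel] = expert(x[sel])
        return out, probs


def expert_importance(layer: ToyMoELayer, calib: torch.Tensor) -> torch.Tensor:
    """One simple criterion: mean routing probability on calibration data."""
    with torch.no_grad():
        _, probs = layer(calib)
    return probs.mean(dim=0)


def iterative_expert_drop(layer: ToyMoELayer, calib: torch.Tensor,
                          drop_per_round: int, rounds: int) -> None:
    """Drop the least important experts over several rounds, re-estimating
    importance after each round (iterative rather than one-shot pruning)."""
    for _ in range(rounds):
        scores = expert_importance(layer, calib)
        scores[~layer.active] = float("inf")  # never re-rank dropped experts
        to_drop = scores.argsort()[:drop_per_round]
        layer.active[to_drop] = False
        # A correction step (e.g. brief task-agnostic fine-tuning) would go
        # here before importance is re-estimated in the next round.


if __name__ == "__main__":
    torch.manual_seed(0)
    layer = ToyMoELayer(d_model=16, n_experts=8)
    calib = torch.randn(256, 16)
    iterative_expert_drop(layer, calib, drop_per_round=1, rounds=3)
    print("Active experts:", layer.active.tolist())
```

In the paper's setting, the toy criterion would be replaced by one of the MC-Suite recipes, and a task-agnostic fine-tuning pass would follow each drop round before importance is re-estimated, as described in the abstract.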
Similar Papers
Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations
Computation and Language
Makes AI models smaller, faster, and just as smart.
Capacity-Aware Inference: Mitigating the Straggler Effect in Mixture of Experts
Machine Learning (CS)
Makes AI models run much faster and smarter.
A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications
Machine Learning (CS)
Makes smart computer programs use less power.