Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations
By: Zican Dong, Han Peng, Peiyu Liu, and more
Potential Business Impact:
Makes AI models smaller, faster, and just as smart.
Mixture-of-Experts (MoE) models achieve a favorable trade-off between performance and inference efficiency by activating only a subset of experts. However, the memory overhead of storing all experts remains a major limitation, especially in large-scale MoE models such as DeepSeek-R1 (671B). In this study, we investigate domain specialization and expert redundancy in large-scale MoE models and uncover a consistent behavior we term few-shot expert localization: with only a few in-domain demonstrations, the model consistently activates a sparse and stable subset of experts on tasks within the same domain. Building on this observation, we propose a simple yet effective pruning framework, EASY-EP, that leverages a few domain-specific demonstrations to identify and retain only the most relevant experts. EASY-EP comprises two key components: output-aware expert importance assessment and expert-level token contribution estimation. The former evaluates the importance of each expert for the current token by considering the gating scores and L2 norm of the outputs of activated experts, while the latter assesses the contribution of tokens based on representation similarities before and after the routed experts. Experiments on DeepSeek-R1 and DeepSeek-V3-0324 show that our method achieves performance comparable to the full model and $2.99\times$ throughput under the same memory budget, while retaining only half of the experts.
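The abstract outlines two scoring signals: an output-aware expert importance (gating score times the L2 norm of each activated expert's output) and a token contribution weight (based on how much the representation changes across the routed experts). Below is a minimal sketch of how such scores could be combined over a few demonstrations to pick the experts to keep; the tensor names, shapes, and the use of cosine similarity are assumptions for illustration, not the authors' implementation.

```python
import torch

# Assumed inputs, collected while running the model on a few in-domain demonstrations:
#   gate_scores:    [num_tokens, num_experts]          gating probabilities per token
#   expert_outputs: [num_tokens, num_experts, hidden]  output of each expert (zero if not activated)
#   hidden_in:      [num_tokens, hidden]               representations before the routed experts
#   hidden_out:     [num_tokens, hidden]               representations after the routed experts

def expert_importance(gate_scores, expert_outputs, hidden_in, hidden_out):
    # Token contribution: tokens whose representation changes more across the
    # routed experts (lower cosine similarity) are weighted more heavily.
    sim = torch.nn.functional.cosine_similarity(hidden_in, hidden_out, dim=-1)  # [num_tokens]
    token_weight = 1.0 - sim

    # Output-aware importance: gating score times the L2 norm of each expert's output.
    out_norm = expert_outputs.norm(dim=-1)              # [num_tokens, num_experts]
    per_token_importance = gate_scores * out_norm       # [num_tokens, num_experts]

    # Aggregate over all demonstration tokens to get one score per expert.
    return (token_weight.unsqueeze(-1) * per_token_importance).sum(dim=0)  # [num_experts]

def select_experts(importance, keep_ratio=0.5):
    # Keep the top-scoring experts, e.g. half of them as in the reported memory budget.
    k = max(1, int(importance.numel() * keep_ratio))
    return torch.topk(importance, k).indices
```

In this sketch the retained expert set would be computed once per domain from the demonstration pass and then used to drop the remaining experts from memory.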
Similar Papers
Cluster-Driven Expert Pruning for Mixture-of-Experts Large Language Models
Computation and Language
Makes big AI models smaller and faster.
Faster MoE LLM Inference for Extremely Large Models
Computation and Language
Makes AI faster by using fewer parts.
Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations
Machine Learning (CS)
Makes smart computer programs smaller and faster.