Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations

Published: April 9, 2025 | arXiv ID: 2504.06792v2

By: Zican Dong, Han Peng, Peiyu Liu, and more

Potential Business Impact:

Makes AI models smaller, faster, and just as smart.

Business Areas:
A/B Testing, Data and Analytics

Mixture-of-Experts (MoE) models achieve a favorable trade-off between performance and inference efficiency by activating only a subset of experts. However, the memory overhead of storing all experts remains a major limitation, especially in large-scale MoE models such as DeepSeek-R1 (671B). In this study, we investigate domain specialization and expert redundancy in large-scale MoE models and uncover a consistent behavior we term few-shot expert localization: with only a few in-domain demonstrations, the model consistently activates a sparse and stable subset of experts on tasks within the same domain. Building on this observation, we propose a simple yet effective pruning framework, EASY-EP, that leverages a few domain-specific demonstrations to identify and retain only the most relevant experts. EASY-EP comprises two key components: output-aware expert importance assessment and expert-level token contribution estimation. The former evaluates the importance of each expert for the current token by considering the gating scores and the L2 norm of the outputs of activated experts, while the latter assesses the contribution of tokens based on representation similarities before and after the routed experts. Experiments on DeepSeek-R1 and DeepSeek-V3-0324 show that our method achieves performance comparable to the full model and $2.99\times$ throughput under the same memory budget, while retaining only half of the experts.
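To make the expert-selection idea in the abstract concrete, here is a minimal sketch, not the authors' official EASY-EP implementation. The exact scoring formulas, the use of cosine similarity for token contribution, and the names `gate_scores`, `expert_outputs`, and `keep_ratio` are assumptions inferred from the abstract's wording.

```python
# Hedged sketch of few-shot, domain-specific expert pruning for one MoE layer.
# Assumption: gating scores and per-expert outputs have been cached while running
# a few in-domain demonstrations through the model.
import torch


def select_experts(
    gate_scores: torch.Tensor,     # [num_tokens, num_experts] routing probabilities
    expert_outputs: torch.Tensor,  # [num_tokens, num_experts, hidden] outputs (zero if not activated)
    hidden_before: torch.Tensor,   # [num_tokens, hidden] hidden states before routed experts
    hidden_after: torch.Tensor,    # [num_tokens, hidden] hidden states after routed experts
    keep_ratio: float = 0.5,       # fraction of experts to retain (the paper keeps about half)
):
    # Expert-level token contribution (assumed form): how much the routed experts
    # change each token's representation, estimated as 1 - cosine similarity.
    cos = torch.nn.functional.cosine_similarity(hidden_before, hidden_after, dim=-1)
    token_weight = (1.0 - cos).clamp(min=0.0)                   # [num_tokens]

    # Output-aware expert importance (assumed form): gating score times the L2 norm
    # of the activated expert's output, accumulated over demonstration tokens.
    out_norm = expert_outputs.norm(dim=-1)                      # [num_tokens, num_experts]
    importance = (token_weight[:, None] * gate_scores * out_norm).sum(dim=0)

    # Retain only the highest-scoring experts for this domain.
    num_keep = max(1, int(keep_ratio * gate_scores.shape[1]))
    kept = torch.topk(importance, num_keep).indices
    return kept, importance
```

In a full pipeline, scores like these would be computed per MoE layer from a handful of in-domain demonstrations, and the experts outside `kept` would be dropped from memory before serving.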

Country of Origin
🇨🇳 China

Page Count
18 pages

Category
Computer Science:
Computation and Language