MoE Pathfinder: Trajectory-driven Expert Pruning
By: Xican Yang, Yuanhe Tian, Yan Song
Mixture-of-experts (MoE) architectures used in large language models (LLMs) achieve state-of-the-art performance across diverse tasks, yet face practical challenges such as deployment complexity and low activation efficiency. Expert pruning has thus emerged as a promising way to reduce computational overhead and simplify the deployment of MoE models. However, existing expert pruning approaches typically rely on local importance metrics and often apply uniform layer-wise pruning, leveraging only partial evaluation signals and overlooking the heterogeneous contributions of experts across layers. To address these limitations, we propose an expert pruning approach based on the trajectory of activated experts across layers, which treats the MoE model as a weighted computation graph and casts expert selection as a global optimal-path-planning problem. Within this framework, we integrate complementary importance signals from reconstruction error, routing probabilities, and activation strength at the trajectory level, which naturally yields non-uniform expert retention across layers. Experiments show that our approach outperforms most existing approaches on nearly all tasks.
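To make the trajectory-level idea concrete, below is a minimal Python sketch of how cross-layer expert trajectories could be scored and pruned globally, assuming that per-token activated-expert trajectories, routing probabilities, activation norms, and a reconstruction-error proxy have been collected on calibration data. The additive weighting, the function names, and the global keep-ratio budget are illustrative assumptions, not the paper's exact algorithm; they only show how combining signals at the trajectory level can lead to non-uniform retention across layers.

```python
import numpy as np

def trajectory_score(route_probs, act_norms, recon_errs, alpha=1.0, beta=1.0, gamma=1.0):
    """Score one cross-layer trajectory: reward high routing probability and
    activation strength, penalize reconstruction error (per-layer arrays).
    The additive log-space weighting here is an illustrative assumption."""
    return float(np.sum(alpha * np.log(np.asarray(route_probs) + 1e-9)
                        + beta * np.log(np.asarray(act_norms) + 1e-9)
                        - gamma * np.asarray(recon_errs)))

def accumulate_scores(trajectories, num_layers, num_experts):
    """trajectories: list of (expert_ids, route_probs, act_norms, recon_errs),
    where expert_ids[l] is the expert activated at layer l for one token.
    Returns a [num_layers, num_experts] matrix holding, for each expert,
    the best score among trajectories passing through it."""
    importance = np.full((num_layers, num_experts), -np.inf)
    for expert_ids, probs, norms, errs in trajectories:
        s = trajectory_score(probs, norms, errs)
        for layer, expert in enumerate(expert_ids):
            importance[layer, expert] = max(importance[layer, expert], s)
    return importance

def prune_globally(importance, keep_ratio=0.5):
    """Keep the globally top-scoring experts across ALL layers, so the number
    of retained experts can differ from layer to layer (non-uniform pruning)."""
    flat = importance.ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.sort(flat)[-k]
    return importance >= threshold   # boolean retention mask [num_layers, num_experts]
```

A usage pattern, under the same assumptions, would be to run a calibration set through the MoE model, log one trajectory per token, call accumulate_scores, and then apply the mask from prune_globally when deciding which experts to drop; because the budget is global rather than per-layer, layers whose experts contribute more to high-scoring trajectories automatically retain more experts.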
Similar Papers
Cluster-Driven Expert Pruning for Mixture-of-Experts Large Language Models
Computation and Language
Makes big AI models smaller and faster.
ToMoE: Converting Dense Large Language Models to Mixture-of-Experts through Dynamic Structural Pruning
Machine Learning (CS)
Makes smart computer programs smaller and faster.
REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression
Machine Learning (CS)
Makes AI models smaller without losing smarts.