Breaking the MoE LLM Trilemma: Dynamic Expert Clustering with Structured Compression
By: Peijun Zhu, Ning Yang, Jiayu Wei, and more
Potential Business Impact:
Makes AI smarter and faster while using less memory.
Mixture-of-Experts (MoE) Large Language Models (LLMs) face a trilemma of load imbalance, parameter redundancy, and communication overhead. We introduce a unified framework based on dynamic expert clustering and structured compression to address these issues cohesively. Our method employs an online clustering procedure that periodically regroups experts using a fused metric of parameter and activation similarity, which stabilizes expert utilization. To our knowledge, this is one of the first frameworks to leverage the semantic embedding capability of the router to dynamically reconfigure the model's architecture during training for substantial efficiency gains. Within each cluster, we decompose expert weights into a shared base matrix and extremely low-rank residual adapters, achieving up to fivefold parameter reduction per group while preserving specialization. This structure enables a two-stage hierarchical routing strategy: tokens are first assigned to a cluster, then to specific experts within it, drastically reducing the routing search space and the volume of all-to-all communication. Furthermore, a heterogeneous precision scheme, which stores shared bases in FP16 and residual factors in INT4, coupled with dynamic offloading of inactive clusters, reduces peak memory consumption to levels comparable to dense models. Evaluated on GLUE and WikiText-103, our framework matches the quality of standard MoE models while reducing total parameters by approximately 80%, improving throughput by 10% to 20%, and lowering expert load variance by a factor of over three. Our work demonstrates that structural reorganization is a principled path toward scalable, efficient, and memory-efficient MoE LLMs.
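To make the abstract's first ingredient concrete, here is a minimal sketch of a fused parameter/activation similarity metric of the kind the online clustering step could consume. The blending weight `alpha`, the cosine-similarity choice, and the function name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a fused parameter/activation similarity metric (names are assumptions).
import torch
import torch.nn.functional as F

def fused_expert_similarity(expert_weights: torch.Tensor,
                            expert_activations: torch.Tensor,
                            alpha: float = 0.5) -> torch.Tensor:
    """expert_weights: (n_experts, ...) stacked expert parameters.
    expert_activations: (n_experts, d_stat) running activation statistics.
    Returns an (n_experts, n_experts) similarity matrix that a periodic
    regrouping pass (e.g. k-means or agglomerative clustering) could consume."""
    w = F.normalize(expert_weights.flatten(1), dim=-1)   # parameter-space cosine similarity
    a = F.normalize(expert_activations, dim=-1)          # activation-space cosine similarity
    return alpha * (w @ w.T) + (1.0 - alpha) * (a @ a.T)
```

A periodic regrouping pass would then re-partition experts so that each cluster's members are similar enough to share a common base.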
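The structured compression inside a cluster can be pictured as one shared base matrix plus a rank-r residual per expert. A minimal sketch, assuming a single FFN projection per expert and hypothetical class and parameter names:

```python
import torch
import torch.nn as nn

class ClusteredExpertFFN(nn.Module):
    """One expert cluster: the effective weight of expert e is W_base + U_e @ V_e,
    so each expert contributes only (d_model + d_ff) * rank extra parameters
    instead of a full d_model * d_ff matrix."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int, rank: int = 8):
        super().__init__()
        self.base = nn.Parameter(torch.empty(d_model, d_ff))   # shared base (e.g. kept in FP16)
        nn.init.xavier_uniform_(self.base)
        # Per-expert low-rank residual factors (candidates for INT4 storage).
        self.U = nn.Parameter(torch.randn(n_experts, d_model, rank) * 0.01)
        self.V = nn.Parameter(torch.randn(n_experts, rank, d_ff) * 0.01)

    def forward(self, x: torch.Tensor, expert_idx: int) -> torch.Tensor:
        # x: (tokens_routed_to_this_expert, d_model)
        residual = self.U[expert_idx] @ self.V[expert_idx]      # (d_model, d_ff)
        return x @ (self.base + residual)
```

For example, with d_model = 1024, d_ff = 4096, and rank = 8, each expert's residual holds about 41K parameters versus the 4.2M of a full projection, which is the kind of per-group saving behind the reported up-to-fivefold reduction.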
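The two-stage routing can likewise be sketched as a coarse cluster gate followed by a small per-cluster expert gate; the top-1 selection and module layout below are assumptions chosen only to show how the search space shrinks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalRouter(nn.Module):
    """Stage 1 assigns each token to a cluster, stage 2 to an expert inside it,
    so a token scores n_clusters + experts_per_cluster logits instead of
    n_clusters * experts_per_cluster."""

    def __init__(self, d_model: int, n_clusters: int, experts_per_cluster: int):
        super().__init__()
        self.cluster_gate = nn.Linear(d_model, n_clusters)
        self.expert_gates = nn.ModuleList(
            nn.Linear(d_model, experts_per_cluster) for _ in range(n_clusters)
        )

    def forward(self, x: torch.Tensor):
        # x: (tokens, d_model)
        cluster_id = F.softmax(self.cluster_gate(x), dim=-1).argmax(dim=-1)
        expert_id = torch.zeros_like(cluster_id)
        for c, gate in enumerate(self.expert_gates):
            mask = cluster_id == c
            if mask.any():                       # only score experts inside the chosen cluster
                expert_id[mask] = gate(x[mask]).argmax(dim=-1)
        return cluster_id, expert_id
```

Because a token only scores experts inside its chosen cluster, the all-to-all exchange covers one cluster's experts per token rather than the full expert pool, and clusters that receive no tokens in a batch are natural candidates for the offloading the abstract describes.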
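Finally, the heterogeneous precision scheme keeps the shared base in FP16 while storing the residual factors in INT4. The abstract does not specify the quantization scheme, so the symmetric per-tensor variant below is purely an assumption for illustration.

```python
import torch

def quantize_int4(t: torch.Tensor):
    """Symmetric per-tensor INT4 quantization of a low-rank residual factor.
    Real kernels would pack two 4-bit codes per byte; here the clamped codes
    are simply stored in int8 for clarity."""
    scale = (t.abs().max().clamp(min=1e-8) / 7.0).item()   # int4 range is [-8, 7]
    codes = torch.clamp(torch.round(t / scale), -8, 7).to(torch.int8)
    return codes, scale

def dequantize_int4(codes: torch.Tensor, scale: float) -> torch.Tensor:
    # Dequantize back to FP16 before adding the residual to the FP16 shared base.
    return codes.to(torch.float16) * scale
```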
Similar Papers
ReXMoE: Reusing Experts with Minimal Overhead in Mixture-of-Experts
Computation and Language
Makes smart computer programs learn better and faster.
CoMoE: Collaborative Optimization of Expert Aggregation and Offloading for MoE-based LLMs at Edge
Networking and Internet Architecture
Makes big AI models fit on phones.
ElasticMoE: An Efficient Auto Scaling Method for Mixture-of-Experts Models
Distributed, Parallel, and Cluster Computing
Lets big AI models grow and shrink instantly.