Unveiling Hidden Collaboration within Mixture-of-Experts in Large Language Models
By: Yuanbo Tang, Yan Tang, Naifan Zhang, and more
Potential Business Impact:
Makes AI smarter by teaching experts to work together.
Mixture-of-Experts based large language models (MoE LLMs) have shown significant promise in multitask adaptability by dynamically routing inputs to specialized experts. Despite their success, the collaborative mechanisms among experts remain poorly understood, limiting both the interpretability and optimization of these models. In this paper, we focus on two critical issues: (1) identifying expert collaboration patterns, and (2) optimizing MoE LLMs through expert pruning. To address the first issue, we propose a hierarchical sparse dictionary learning (HSDL) method that uncovers collaboration patterns among experts. For the second, we introduce the Contribution-Aware Expert Pruning (CAEP) algorithm, which effectively prunes low-contribution experts. Extensive experiments demonstrate that expert collaboration patterns are closely tied to specific input types and carry semantic significance across tasks. Moreover, pruning experiments show that our approach improves overall performance by 2.5% on average, outperforming existing methods. These findings offer a clearer understanding of expert interactions and practical guidance for improving the efficiency and interpretability of MoE LLMs.
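To make the two ideas concrete, here is a minimal sketch, not the paper's HSDL or CAEP implementation: it applies plain (non-hierarchical) sparse dictionary learning to a synthetic matrix of per-token expert routing weights to surface recurring co-activation patterns, then ranks experts by a naive aggregate-contribution score as a pruning heuristic. All shapes, scores, and variable names below are illustrative assumptions.

```python
# Illustrative sketch only; not the authors' HSDL/CAEP code.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_tokens, n_experts = 512, 16                      # hypothetical MoE layer with 16 experts
routing = rng.random((n_tokens, n_experts))
routing /= routing.sum(axis=1, keepdims=True)      # normalize to softmax-like routing weights

# Sparse dictionary learning: each atom is a recurring expert co-activation pattern.
dl = DictionaryLearning(n_components=6, alpha=0.5, random_state=0)
codes = dl.fit_transform(routing)                  # (n_tokens, 6): how strongly each token uses each pattern
patterns = dl.components_                          # (6, n_experts): candidate collaboration patterns

# Naive contribution score: total routing mass each expert receives across tokens.
contribution = routing.sum(axis=0)
prune_order = np.argsort(contribution)             # lowest-contribution experts first
print("Candidate experts to prune:", prune_order[:4])
```

In this toy setup the dictionary atoms stand in for the collaboration patterns the paper recovers, and the contribution ranking stands in for the pruning criterion; the actual method operates on real routing statistics from an MoE LLM and uses a hierarchical formulation.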
Similar Papers
Cluster-Driven Expert Pruning for Mixture-of-Experts Large Language Models
Computation and Language
Makes big AI models smaller and faster.
MoECollab: Democratizing LLM Development Through Collaborative Mixture of Experts
Machine Learning (CS)
Lets many people build smarter AI together.
Breaking the MoE LLM Trilemma: Dynamic Expert Clustering with Structured Compression
Computation and Language
Makes AI smarter, faster, and use less memory.