Unveiling Hidden Collaboration within Mixture-of-Experts in Large Language Models

Published: April 16, 2025 | arXiv ID: 2504.12359v1

By: Yuanbo Tang, Yan Tang, Naifan Zhang, and more

Potential Business Impact:

Reveals how the experts inside Mixture-of-Experts language models collaborate and prunes low-contribution experts, making the models more efficient and easier to interpret.

Business Areas:
Crowdsourcing Collaboration

Mixture-of-Experts based large language models (MoE LLMs) have shown significant promise in multitask adaptability by dynamically routing inputs to specialized experts. Despite their success, the collaborative mechanisms among experts are still not well understood, limiting both the interpretability and optimization of these models. In this paper, we focus on two critical issues: (1) identifying expert collaboration patterns, and (2) optimizing MoE LLMs through expert pruning. To address the first issue, we propose a hierarchical sparse dictionary learning (HSDL) method that uncovers the collaboration patterns among experts. For the second issue, we introduce the Contribution-Aware Expert Pruning (CAEP) algorithm, which effectively prunes low-contribution experts. Our extensive experiments demonstrate that expert collaboration patterns are closely linked to specific input types and exhibit semantic significance across various tasks. Moreover, pruning experiments show that our approach improves overall performance by 2.5% on average, outperforming existing methods. These findings provide valuable insights into enhancing the efficiency and interpretability of MoE LLMs, offering a clearer understanding of expert interactions and guiding model optimization.
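
The abstract names two techniques without detailing them: sparse dictionary learning over expert activations to surface collaboration patterns, and contribution-based pruning of experts. The sketch below is only an illustration of what those two ideas could look like in the simplest (flat, non-hierarchical) form, assuming access to logged router outputs; the variable names, the contribution score, and the use of scikit-learn's DictionaryLearning are assumptions, not the authors' HSDL or CAEP implementations.

```python
# Minimal sketch (not the paper's implementation) of:
# (1) learning a sparse dictionary over per-token expert routing vectors, so each
#     dictionary atom is a recurring group of co-activated experts, and
# (2) a hypothetical contribution score used to prune the lowest-scoring experts.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Hypothetical stand-in for logged MoE router outputs:
# rows = tokens, columns = experts, values = (sparse) routing weights.
n_tokens, n_experts = 2000, 16
routing_weights = rng.random((n_tokens, n_experts)) * (rng.random((n_tokens, n_experts)) < 0.15)

# (1) Flat sparse dictionary learning: each atom is a soft group of experts that
#     tend to fire together; the sparse codes say which pattern a token used.
dl = DictionaryLearning(n_components=8, alpha=1.0, max_iter=200,
                        transform_algorithm="lasso_lars", random_state=0)
codes = dl.fit_transform(routing_weights)   # shape: (n_tokens, n_patterns)
patterns = dl.components_                   # shape: (n_patterns, n_experts)

for k, atom in enumerate(patterns):
    top = np.argsort(-np.abs(atom))[:3]
    print(f"pattern {k}: dominant experts {top.tolist()}")

# (2) An assumed contribution proxy: total routing mass an expert receives,
#     weighted by how strongly it appears across learned patterns. The actual
#     CAEP criterion is not specified in the abstract.
contribution = routing_weights.sum(axis=0) * (np.abs(patterns).sum(axis=0) + 1e-8)
keep_fraction = 0.75
n_keep = int(np.ceil(keep_fraction * n_experts))
kept_experts = np.argsort(-contribution)[:n_keep]
print("experts kept after pruning:", sorted(kept_experts.tolist()))
```

In this toy setup the dictionary atoms play the role of collaboration patterns and the pruning step simply drops the quarter of experts with the lowest score; the paper's hierarchical dictionary structure and its 2.5% average improvement come from the full method, which this sketch does not reproduce.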

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)