MergeMoE: Efficient Compression of MoE Models via Expert Output Merging
By: Ruijie Miao, Yilun Yao, Zihan Wang, and more
Potential Business Impact:
Makes big AI models smaller without losing smarts.
The Mixture-of-Experts (MoE) technique has proven to be a promising approach for efficiently scaling model size and has been widely adopted in recent LLM advancements. However, the substantial memory overhead of MoE models makes their compression an important research direction. In this work, we provide a theoretical analysis of expert merging, a recently proposed technique for compressing MoE models. Rather than interpreting expert merging from the conventional perspective of parameter aggregation, we approach it from the perspective of merging experts' outputs. Our key insight is that the merging process can be interpreted as inserting additional matrices into the forward computation, which naturally leads to an optimization formulation. Building on this analysis, we introduce MergeMoE, a method that leverages mathematical optimization to construct the compression matrices. We evaluate MergeMoE on multiple MoE models and show that our algorithm consistently outperforms the baselines at the same compression ratios.
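To make the output-merging view concrete, the following is a minimal sketch, not the authors' exact algorithm: a group of expert FFNs is replaced by a single merged expert followed by an inserted matrix M, and M is fit by least squares on calibration activations so that the merged path approximates the gated sum of the original experts' outputs. The shapes, the GELU activation, the random stand-in weights, and the plain weight-averaging used to form the merged expert are all illustrative assumptions.

```python
# Sketch of output-level expert merging as an optimization problem (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_tokens, n_experts = 64, 256, 512, 4

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

# Random stand-ins for expert FFN weights: up-projection W1, down-projection W2.
W1 = [rng.standard_normal((d_model, d_ff)) / np.sqrt(d_model) for _ in range(n_experts)]
W2 = [rng.standard_normal((d_ff, d_model)) / np.sqrt(d_ff) for _ in range(n_experts)]

X = rng.standard_normal((n_tokens, d_model))              # calibration activations
gates = rng.dirichlet(np.ones(n_experts), size=n_tokens)  # per-token routing weights

# Target: the gated sum of the original experts' outputs.
target = sum(gates[:, [i]] * (gelu(X @ W1[i]) @ W2[i]) for i in range(n_experts))

# Merged expert: here simply the average of the expert weights (an assumption).
W1_m, W2_m = np.mean(W1, axis=0), np.mean(W2, axis=0)
merged_out = gelu(X @ W1_m) @ W2_m

# Inserted matrix M: chosen by least squares so that merged_out @ M ≈ target.
M, *_ = np.linalg.lstsq(merged_out, target, rcond=None)

err_plain = np.linalg.norm(merged_out - target) / np.linalg.norm(target)
err_fit = np.linalg.norm(merged_out @ M - target) / np.linalg.norm(target)
print(f"relative error, plain merge: {err_plain:.3f}; with fitted M: {err_fit:.3f}")
```

The point of the sketch is that once merging is framed at the output level, the inserted matrix has a closed-form least-squares solution, which is the kind of optimization formulation the abstract refers to.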
Similar Papers
PuzzleMoE: Efficient Compression of Large Mixture-of-Experts Models via Sparse Expert Merging and Bit-packed Inference
Machine Learning (CS)
Makes smart computer programs smaller and faster.
Efficiently Editing Mixture-of-Experts Models with Compressed Experts
Machine Learning (CS)
Makes AI smarter and faster using less power.
MoBE: Mixture-of-Basis-Experts for Compressing MoE-based LLMs
Machine Learning (CS)
Makes big AI models smaller without losing smarts.