FLEX-MoE: Federated Mixture-of-Experts with Load-balanced Expert Assignment
By: Boyang Zhang, Xiaobing Chen, Songyang Zhang, and more
Mixture-of-Experts (MoE) models enable scalable neural networks through conditional computation. However, deploying them with federated learning (FL) faces two critical challenges: 1) resource-constrained edge devices cannot store the full set of experts, and 2) non-IID data distributions cause severe expert load imbalance that degrades model performance. To this end, we propose FLEX-MoE, a novel federated MoE framework that jointly optimizes expert assignment and load balancing under limited client capacity. Specifically, our approach introduces client-expert fitness scores that quantify each expert's suitability for a client's local dataset through training feedback, and employs an optimization-based algorithm that maximizes client-expert specialization while enforcing balanced expert utilization system-wide. Unlike existing greedy methods that focus solely on personalization and ignore load imbalance, FLEX-MoE addresses the expert-utilization skew that is particularly severe in FL settings with heterogeneous data. Comprehensive experiments on three datasets demonstrate the superior performance of FLEX-MoE and its ability to maintain balanced expert utilization across diverse resource-constrained scenarios.
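The abstract does not spell out the optimization formulation, but one hypothetical way to read the capacity-constrained assignment step is as a linear program: maximize total client-expert fitness subject to a per-client storage budget and a per-expert load cap. The sketch below is illustrative only and is not the authors' algorithm; the fitness matrix F and the names assign_experts, client_capacity, and expert_load_cap are assumptions introduced here for clarity.

```python
# Illustrative sketch (not the paper's method): capacity-constrained
# client-expert assignment posed as a linear program.
import numpy as np
from scipy.optimize import linprog

def assign_experts(F, client_capacity, expert_load_cap):
    """Maximize total fitness subject to capacity and load-cap constraints.

    F: (n_clients, n_experts) array of client-expert fitness scores.
    client_capacity: max number of experts each client can store.
    expert_load_cap: max number of clients any single expert may serve.
    Returns a binary (n_clients, n_experts) assignment matrix.
    """
    n_clients, n_experts = F.shape
    # Decision variables x[c, e] in [0, 1], flattened row-major.
    c = -F.flatten()  # linprog minimizes, so negate the fitness scores.

    # Per-client capacity: sum over experts of x[c, e] <= client_capacity.
    A_client = np.zeros((n_clients, n_clients * n_experts))
    for i in range(n_clients):
        A_client[i, i * n_experts:(i + 1) * n_experts] = 1.0

    # Per-expert load cap: sum over clients of x[c, e] <= expert_load_cap.
    A_expert = np.zeros((n_experts, n_clients * n_experts))
    for j in range(n_experts):
        A_expert[j, j::n_experts] = 1.0

    A_ub = np.vstack([A_client, A_expert])
    b_ub = np.concatenate([
        np.full(n_clients, client_capacity),
        np.full(n_experts, expert_load_cap),
    ])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1), method="highs")
    # This transportation-style constraint matrix is totally unimodular,
    # so the LP optimum is (typically) already integral; round to be safe.
    return res.x.reshape(n_clients, n_experts).round().astype(int)

# Example: 4 clients and 3 experts, each client stores 2 experts,
# and each expert serves at most 3 clients.
rng = np.random.default_rng(0)
fitness = rng.random((4, 3))
print(assign_experts(fitness, client_capacity=2, expert_load_cap=3))
```

In this reading, the expert load cap is what enforces balanced utilization: without it, the problem decouples per client and reduces to the greedy top-k selection the abstract contrasts against.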