Stable-MoE: Lyapunov-based Token Routing for Distributed Mixture-of-Experts Training over Edge Networks
By: Long Shi, Bingyan Ou, Kang Wei, and more
Potential Business Impact:
Makes smart devices learn faster with less power.
The sparse activation mechanism of the mixture-of-experts (MoE) model empowers edge intelligence with enhanced training efficiency and reduced computational resource consumption. However, traditional token routing in distributed MoE training faces significant challenges in resource-constrained edge networks, which are characterized by heterogeneous computing capabilities and stochastic token arrivals, leading to workload backlog, resource inefficiency, and performance degradation. To address these issues, we propose Stable-MoE, a novel Lyapunov-based token routing framework for distributed MoE training over resource-heterogeneous edge networks. Specifically, we formulate a stochastic optimization problem that maximizes both system throughput and gating consistency by jointly optimizing the token routing strategy and computational resource allocation, while ensuring long-term stability of both the token and energy queues at the edge devices. Using Lyapunov optimization, we transform the intractable long-term optimization problem into tractable per-slot subproblems, enabling online decisions on token routing and computation frequency without knowledge of future system states. Experimental results on the SVHN and CIFAR-100 datasets demonstrate that Stable-MoE outperforms the baselines with gains of at least 40% in system throughput and 5% in test accuracy.
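The per-slot subproblem produced by Lyapunov optimization typically takes a drift-plus-penalty form: each routing decision trades off current queue backlogs (drift) against a reward scaled by a control parameter V (penalty). The sketch below is a minimal illustration of that idea, not the paper's actual algorithm; the function names (`route_token`, `update_queue`), the linear scoring rule, and the parameter `V` are all illustrative assumptions.

```python
def route_token(token_queues, energy_queues, rates, energy_costs, V):
    """Pick a device for one arriving token by minimizing a
    drift-plus-penalty score: backlog terms minus V times a
    throughput proxy (the device's processing rate).

    token_queues  - current token backlog per device
    energy_queues - virtual energy-deficit queue per device
    rates         - processing rate per device (throughput proxy)
    energy_costs  - energy cost per token per device
    V             - Lyapunov trade-off parameter (larger V favors
                    throughput over queue stability)
    """
    best_dev, best_score = 0, float("inf")
    for d, (q, e, r, c) in enumerate(
        zip(token_queues, energy_queues, rates, energy_costs)
    ):
        # Drift: backlog plus energy-queue-weighted cost; penalty: -V * rate.
        score = q + e * c - V * r
        if score < best_score:
            best_dev, best_score = d, score
    return best_dev


def update_queue(backlog, arrivals, service):
    """Standard queue evolution: Q(t+1) = max(Q(t) + arrivals - service, 0)."""
    return max(backlog + arrivals - service, 0.0)
```

With equal rates, a heavily backlogged device is avoided; raising V shifts the decision toward the faster device even when it carries some backlog, mirroring the throughput-versus-stability trade-off the framework controls.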
Similar Papers
Load Balancing Mixture of Experts with Similarity Preserving Routers
Machine Learning (CS)
Makes AI learn faster and smarter.
From Score Distributions to Balance: Plug-and-Play Mixture-of-Experts Routing
Machine Learning (CS)
Makes AI faster and cheaper by sharing work.
Bayesian Mixture-of-Experts: Towards Making LLMs Know What They Don't Know
Machine Learning (CS)
Makes AI know when it's unsure.