Enabling MoE on the Edge via Importance-Driven Expert Scheduling

Published: August 26, 2025 | arXiv ID: 2508.18983v1

By: Guoying Zhu, Meng Li, Haipeng Dai, and more

Potential Business Impact:

Lets large AI language models run faster, and within tight memory limits, on consumer devices such as phones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The Mixture of Experts (MoE) architecture has emerged as a key technique for scaling Large Language Models by activating only a subset of experts per query. Deploying MoE on consumer-grade edge hardware, however, is constrained by limited device memory, making dynamic expert offloading essential. Unlike prior work that treats offloading purely as a scheduling problem, we leverage expert importance to guide decisions, substituting low-importance activated experts with functionally similar ones already cached in GPU memory, thereby preserving accuracy. As a result, this design reduces memory usage and data transfer, while largely eliminating PCIe overhead. In addition, we introduce a scheduling policy that maximizes the reuse ratio of GPU-cached experts, further boosting efficiency. Extensive evaluations show that our approach delivers 48% lower decoding latency with over 60% expert cache hit rate, while maintaining nearly lossless accuracy.
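To make the substitution idea concrete, here is a minimal, hedged sketch of importance-driven expert scheduling, not the authors' implementation. The names `schedule_experts`, `importance_threshold`, `expert_sim`, and `gpu_cache` are illustrative assumptions: when a router-selected expert is missing from GPU memory and its gate weight is low, the most functionally similar cached expert is used instead, avoiding a PCIe transfer.

```python
# Hedged sketch (assumed names, not the paper's code): importance-driven
# expert substitution for MoE offloading on memory-constrained devices.
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def schedule_experts(router_logits, top_k, gpu_cache, expert_sim,
                     importance_threshold=0.05):
    """Decide which experts to run for one token.

    router_logits        : (num_experts,) gating scores from the MoE router
    top_k                : number of experts the router normally activates
    gpu_cache            : set of expert ids currently resident in GPU memory
    expert_sim           : (num_experts, num_experts) functional-similarity matrix
    importance_threshold : gate weight below which a cache miss may be substituted

    Returns (experts_to_run, experts_to_load); experts_to_load must be fetched
    over PCIe, while low-importance misses are swapped for similar cached experts.
    """
    gate = softmax(router_logits)
    activated = np.argsort(gate)[::-1][:top_k]

    experts_to_run, experts_to_load = [], []
    for e in activated:
        if e in gpu_cache:
            experts_to_run.append(int(e))        # cache hit: no transfer needed
        elif gate[e] < importance_threshold:
            # Low-importance miss: run the most functionally similar cached
            # expert instead, trading a small accuracy cost for zero PCIe traffic.
            substitute = max(gpu_cache, key=lambda c: expert_sim[e, c])
            experts_to_run.append(int(substitute))
        else:
            experts_to_load.append(int(e))       # important miss: pay the transfer
            experts_to_run.append(int(e))
    return experts_to_run, experts_to_load


# Toy usage with random data (illustration only)
rng = np.random.default_rng(0)
num_experts = 8
logits = rng.normal(size=num_experts)
sim = rng.random((num_experts, num_experts))
run, load = schedule_experts(logits, top_k=2, gpu_cache={0, 3, 5}, expert_sim=sim)
print("run:", run, "load over PCIe:", load)
```

In this framing, raising `importance_threshold` trades accuracy for fewer PCIe transfers; the paper's scheduling policy additionally tries to keep frequently reused experts cached so the hit rate stays high.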

Country of Origin
🇨🇳 China

Page Count
13 pages

Category
Computer Science: Artificial Intelligence