A Scheduling Framework for Efficient MoE Inference on Edge GPU-NDP Systems
By: Qi Wu, Chao Fang, Jiayuan Chen, and more
Potential Business Impact:
Makes smart AI run faster on small devices.
Mixture-of-Experts (MoE) models facilitate edge deployment by decoupling model capacity from active computation, yet their large memory footprint drives the need for GPU systems with near-data processing (NDP) capabilities that offload experts to dedicated processing units. However, deploying MoE models on such edge GPU-NDP systems faces three critical challenges: 1) severe load imbalance across NDP units due to non-uniform expert selection under expert parallelism, 2) insufficient GPU utilization while experts are computed on the NDP units, and 3) extensive data pre-profiling required for expert pre-fetching because activation patterns are unpredictable. To address these challenges, this paper proposes an efficient inference framework featuring three key optimizations. First, tensor parallelism, underexplored in MoE inference, is exploited to partition large expert weights across multiple NDP units and compute them simultaneously, targeting the low-batch scenarios typical of edge deployment. Second, a load-balancing-aware scheduling algorithm distributes expert computations across the NDP units and the GPU to maximize resource utilization. Third, a dataset-free pre-fetching strategy proactively loads frequently accessed experts to minimize activation delays. Experimental results show that our framework enables GPU-NDP systems to achieve a 2.41x average and up to 2.56x speedup in end-to-end latency over state-of-the-art approaches, significantly enhancing MoE inference efficiency in resource-constrained environments.
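The abstract only outlines the load-balancing-aware scheduler, so the sketch below illustrates one plausible reading of it: treat the NDP units and the GPU as parallel workers with different per-token throughput and greedily assign each activated expert to whichever device would finish earliest. This is a minimal illustration, not the paper's algorithm; the names and parameters (NUM_NDP_UNITS, GPU_SPEEDUP, schedule_experts, the per-token cost model) are assumptions for the example.

```python
# Minimal sketch of load-balancing-aware expert scheduling across NDP units
# and the GPU, under an assumed cost model: each activated expert processes
# some number of tokens, every NDP unit has the same per-token cost, and the
# GPU is GPU_SPEEDUP times faster. All names here are illustrative.

import heapq

NUM_NDP_UNITS = 4   # assumed number of NDP units in the system
GPU_SPEEDUP = 3.0   # assumed GPU-vs-NDP throughput ratio for one expert

def schedule_experts(expert_token_counts):
    """Greedily assign each activated expert to the device that would finish
    earliest (LPT-style list scheduling over NDP units plus the GPU).
    Returns {device_name: [expert_id, ...]}."""
    # Heap entries: (projected finish time, device name, per-token cost).
    devices = [(0.0, f"ndp{i}", 1.0) for i in range(NUM_NDP_UNITS)]
    devices.append((0.0, "gpu", 1.0 / GPU_SPEEDUP))
    heapq.heapify(devices)

    assignment = {name: [] for _, name, _ in devices}
    # Placing the largest experts first keeps the greedy makespan low.
    for expert_id, tokens in sorted(enumerate(expert_token_counts),
                                    key=lambda kv: -kv[1]):
        finish, name, cost = heapq.heappop(devices)
        assignment[name].append(expert_id)
        heapq.heappush(devices, (finish + tokens * cost, name, cost))
    return assignment

if __name__ == "__main__":
    # Eight experts with skewed token counts (non-uniform expert selection).
    print(schedule_experts([120, 80, 40, 10, 5, 5, 2, 1]))
```

In this toy model the heaviest experts land on the GPU or on otherwise idle NDP units, which captures the abstract's stated goal of keeping both the GPU and all NDP units busy; the paper's actual scheduler presumably uses a more detailed cost and communication model.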
Similar Papers
Context-Aware Mixture-of-Experts Inference on CXL-Enabled GPU-NDP Systems
Machine Learning (CS)
Makes AI models run faster and smarter.
OD-MoE: On-Demand Expert Loading for Cacheless Edge-Distributed MoE Inference
Distributed, Parallel, and Cluster Computing
Lets small computers run big AI models.
ExpertFlow: Adaptive Expert Scheduling and Memory Coordination for Efficient MoE Inference
Distributed, Parallel, and Cluster Computing
Makes AI models run faster on less memory.