Efficient MoE Inference with Fine-Grained Scheduling of Disaggregated Expert Parallelism
By: Xinglin Pan, Shaohuai Shi, Wenxiang Lin, and more
The mixture-of-experts (MoE) architecture scales model size with only a sublinear increase in computation, but its inference is memory-intensive due to KV caches and sparse expert activation. Recent disaggregated expert parallelism (DEP) distributes attention and expert modules to dedicated GPU groups, yet it lacks support for shared experts and efficient task scheduling, which limits performance. We propose FinDEP, a fine-grained task scheduling algorithm for DEP that maximizes task overlap to improve MoE inference throughput. FinDEP introduces three innovations: 1) partitioning computation and communication into smaller tasks to enable fine-grained pipelining, 2) formulating a scheduling optimization that supports variable task granularity and ordering, and 3) developing an efficient solver for the resulting large search space. Experiments on four GPU systems with DeepSeek-V2 and Qwen3-MoE show that FinDEP improves throughput by up to 1.61x over prior methods and achieves up to a 1.24x speedup on a 32-GPU system.
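The abstract describes these ideas only at a high level; as a rough illustration of the first two (partitioning work into smaller tasks and searching over granularity to maximize overlap), the toy Python sketch below models a single dispatch-then-compute pipeline. The linear cost model, the per-chunk overhead term, and all function names here are hypothetical and are not taken from the paper, whose actual task model and solver are more general (it also optimizes task ordering across attention, shared experts, and routed experts).

```python
# Hypothetical sketch only: FinDEP's real task model, cost functions, and solver are not
# specified in the abstract. This toy example shows the general idea of searching over
# partition granularity so that communication (token dispatch) overlaps with expert
# computation in a two-stage pipeline. All numbers below are made up.


def pipeline_makespan(comm_times, comp_times):
    """Makespan of a two-stage pipeline: chunk i must finish its dispatch
    (communication) before its expert computation starts, and each stage
    processes chunks sequentially."""
    comm_done = 0.0   # time at which the communication stage becomes free
    comp_done = 0.0   # time at which the computation stage becomes free
    for comm, comp in zip(comm_times, comp_times):
        comm_done += comm
        comp_done = max(comp_done, comm_done) + comp
    return comp_done


def split_cost(total, n_chunks, overhead):
    """Per-chunk cost when work is split into n_chunks, with a fixed
    per-chunk launch overhead (hypothetical linear cost model)."""
    return [total / n_chunks + overhead] * n_chunks


def best_granularity(total_comm, total_comp, overhead, max_chunks=16):
    """Brute-force search over the number of chunks (the granularity)."""
    best = None
    for n in range(1, max_chunks + 1):
        t = pipeline_makespan(split_cost(total_comm, n, overhead),
                              split_cost(total_comp, n, overhead))
        if best is None or t < best[1]:
            best = (n, t)
    return best


if __name__ == "__main__":
    # Unpartitioned baseline: communication and computation run back-to-back.
    baseline = pipeline_makespan([10.0], [12.0])
    n, t = best_granularity(total_comm=10.0, total_comp=12.0, overhead=0.2)
    print(f"1 chunk : {baseline:.2f} time units")
    print(f"{n} chunks: {t:.2f} time units (overlapped)")
```

With these made-up numbers, finer partitioning hides most of the communication behind computation (roughly 15 vs. 22 time units), but the per-chunk overhead means ever-smaller chunks eventually stop helping, which is why granularity must be chosen jointly with the schedule rather than simply maximized.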
Similar Papers
MemFine: Memory-Aware Fine-Grained Scheduling for MoE Training
Distributed, Parallel, and Cluster Computing
Trains big AI models on less computer memory.
ExpertFlow: Adaptive Expert Scheduling and Memory Coordination for Efficient MoE Inference
Distributed, Parallel, and Cluster Computing
Makes AI models run faster on less memory.
MicroMoE: Fine-Grained Load Balancing for Mixture-of-Experts with Token Scheduling
Distributed, Parallel, and Cluster Computing
Makes AI learn faster by balancing computer work.