Orders in Chaos: Enhancing Large-Scale MoE LLM Serving with Data Movement Forecasting
By: Zhongkai Yu, Yue Guan, Zihao Yu, and more
Potential Business Impact:
Cuts the data-movement bottleneck so large MoE AI models can be served much faster and more smoothly.
Large Language Models (LLMs) with Mixture of Experts (MoE) architectures achieve remarkable performance improvements, but their random expert selection mechanism introduces significant data movement overhead that becomes the dominant bottleneck in multi-unit serving systems. To forecast the patterns underlying this data movement, we conduct comprehensive data-movement-centric profiling across three state-of-the-art large-scale MoE models (200B-671B) using over 24,000 requests spanning diverse workloads. With the resulting 150GB+ trace files, we perform systematic analysis from both temporal and spatial perspectives and distill six key insights to guide the design of diverse future serving systems. Taking wafer-scale GPUs as a case study, we demonstrate that minor architectural modifications leveraging our insights achieve substantial performance gains, delivering 6.3X and 4.0X average speedups on DeepSeek V3 and Qwen3, respectively. Our work provides the first comprehensive data-centric analysis of MoE models at scale. Our profiling traces and analysis results are publicly available at https://huggingface.co/datasets/core12345/MoE_expert_selection_trace. We will also release our simulation framework shortly to facilitate future research in this area.
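To make the data-movement problem concrete, the sketch below is a minimal illustration (not the paper's code or traces): a toy top-k MoE router whose per-token expert choices are mapped to hosting devices, counting how many activation bytes would have to cross devices for one MoE layer. All sizes (NUM_EXPERTS, TOP_K, NUM_DEVICES, HIDDEN) and the round-robin expert placement are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: why MoE expert selection drives cross-device data movement.
# Every constant below is an illustrative assumption, not the paper's setup.
import torch

NUM_EXPERTS = 256   # routed experts in the model (assumed)
TOP_K = 8           # experts activated per token (assumed)
NUM_DEVICES = 32    # serving units that shard the experts (assumed)
HIDDEN = 7168       # hidden size of a token activation (assumed)

def route(tokens: torch.Tensor, router_w: torch.Tensor):
    """Pick TOP_K experts per token with a softmax router (toy stand-in)."""
    logits = tokens @ router_w                      # [T, NUM_EXPERTS]
    probs = torch.softmax(logits, dim=-1)
    weights, expert_ids = torch.topk(probs, TOP_K, dim=-1)
    return expert_ids, weights

def data_movement_bytes(expert_ids: torch.Tensor, bytes_per_token: int) -> int:
    """Bytes shuffled across devices, assuming experts are placed round-robin
    and each (token, expert) pair requires sending that token's activation."""
    dst_devices = expert_ids % NUM_DEVICES          # expert id -> hosting device
    return int(dst_devices.numel()) * bytes_per_token

if __name__ == "__main__":
    T = 1024                                        # tokens in one batch
    tokens = torch.randn(T, HIDDEN)
    router_w = torch.randn(HIDDEN, NUM_EXPERTS)
    expert_ids, _ = route(tokens, router_w)
    moved = data_movement_bytes(expert_ids, bytes_per_token=HIDDEN * 2)  # fp16
    print(f"~{moved / 1e6:.1f} MB of token activations moved for one MoE layer")
```

The published traces themselves can be fetched with standard Hugging Face tooling, for example huggingface_hub.snapshot_download(repo_id="core12345/MoE_expert_selection_trace", repo_type="dataset"); the internal file layout is not described here, so inspect the downloaded files before parsing them.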
Similar Papers
Faster MoE LLM Inference for Extremely Large Models
Computation and Language
Makes AI faster by using fewer parts.
MoE-Inference-Bench: Performance Evaluation of Mixture of Expert Large Language and Vision Models
Machine Learning (CS)
Makes AI smarter and faster by using many smart parts.
ElasticMoE: An Efficient Auto Scaling Method for Mixture-of-Experts Models
Distributed, Parallel, and Cluster Computing
Lets big AI models grow and shrink instantly.