Score: 4

Orders in Chaos: Enhancing Large-Scale MoE LLM Serving with Data Movement Forecasting

Published: October 7, 2025 | arXiv ID: 2510.05497v1

By: Zhongkai Yu, Yue Guan, Zihao Yu, and more

BigTech Affiliations: Samsung, NVIDIA

Potential Business Impact:

Cuts data-movement overhead when serving large Mixture-of-Experts AI models, yielding roughly 4-6X faster inference in the paper's wafer-scale GPU case study.

Business Areas:
A/B Testing, Data and Analytics

Large Language Models (LLMs) with Mixture of Experts (MoE) architectures achieve remarkable performance improvements, but their random expert selection mechanism introduces significant data movement overhead that becomes the dominant bottleneck in multi-unit serving systems. To forecast the patterns underlying this data movement, we conduct comprehensive data-movement-centric profiling across three state-of-the-art large-scale MoE models (200B-671B) using over 24,000 requests spanning diverse workloads. With the resulting 150GB+ trace files, we perform systematic analysis from both temporal and spatial perspectives and distill six key insights to guide the design of diverse future serving systems. Taking wafer-scale GPUs as a case study, we demonstrate that minor architectural modifications leveraging our insights achieve substantial performance gains, delivering 6.3X and 4.0X average speedups on DeepSeek V3 and Qwen3, respectively. Our work provides the first comprehensive data-centric analysis of MoE models at scale. Our profiling traces and analysis results are publicly available at https://huggingface.co/datasets/core12345/MoE_expert_selection_trace. We will also release our simulation framework shortly to facilitate future research in this area.
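The released traces lend themselves to the kind of temporal and spatial analysis the abstract describes. As a rough illustration, the sketch below streams a sample of the HuggingFace trace dataset and tallies how often each expert is selected per layer; the split name and the record fields ("layer", "expert_ids") are assumptions for illustration only, since the actual trace schema is defined by the published dataset.

```python
# Minimal sketch: stream the released expert-selection traces and tally
# per-expert selection counts. Field names and the split are assumptions;
# consult the dataset card for the real schema.
from collections import Counter

from datasets import load_dataset  # pip install datasets

# Streaming avoids downloading the full 150GB+ trace set locally.
traces = load_dataset(
    "core12345/MoE_expert_selection_trace",
    split="train",      # assumed split name
    streaming=True,
)

expert_counts = Counter()
for i, record in enumerate(traces):
    # Hypothetical fields: the MoE layer index and the list of experts
    # routed to for one token.
    layer = record.get("layer")
    for expert_id in record.get("expert_ids", []):
        expert_counts[(layer, expert_id)] += 1
    if i >= 100_000:  # bound the sample for a quick look
        break

# Print the most frequently selected (layer, expert) pairs.
for (layer, expert_id), count in expert_counts.most_common(10):
    print(f"layer {layer}, expert {expert_id}: {count} selections")
```

A heavily skewed count here would correspond to the spatial load imbalance the paper profiles; repeating the tally over successive request windows gives a view of the temporal drift in expert popularity.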

Country of Origin
🇰🇷 🇺🇸 United States, South Korea

Repos / Data Links
https://huggingface.co/datasets/core12345/MoE_expert_selection_trace

Page Count
14 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing