Staggered Batch Scheduling: Co-optimizing Time-to-First-Token and Throughput for High-Efficiency LLM Inference
By: Jian Tian, Shuailong Li, Yang Cao, and more
The evolution of Large Language Model (LLM) serving toward complex, distributed architectures, specifically the P/D-separated, large-scale DP+EP paradigm, introduces distinct scheduling challenges. Unlike traditional deployments, where schedulers can treat instances as black boxes, DP+EP architectures incur high internal synchronization costs. We identify that immediate request dispatching in such systems leads to severe in-engine queuing and parallelization bubbles, degrading Time-to-First-Token (TTFT). To address this, we propose Staggered Batch Scheduling (SBS), a mechanism that deliberately buffers requests to form optimal execution batches. This temporal decoupling eliminates internal queuing bubbles without compromising throughput. Furthermore, leveraging the scheduling window created by buffering, we introduce a Load-Aware Global Allocation strategy that balances computational load across DP units for both the Prefill and Decode phases. Deployed on a production H800 cluster serving DeepSeek-V3, our system reduces TTFT by 30%-40% and improves throughput by 15%-20% compared to state-of-the-art immediate scheduling baselines.
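To make the abstract's two ideas concrete, here is a minimal Python sketch of a scheduler that (a) buffers incoming requests instead of dispatching them immediately, releasing a batch only once a size or wait-time threshold is hit, and (b) assigns the formed batch to the least-loaded DP unit. The class name, thresholds, and the token-count load metric are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import heapq
import time
from collections import deque


class StaggeredBatchScheduler:
    """Toy sketch of staggered batch scheduling with load-aware allocation.

    Requests are staged in a buffer until a batch-size or wait-time
    threshold is met; the resulting batch is then dispatched to the DP
    unit with the smallest outstanding load. All parameters and the load
    signal (outstanding prompt tokens) are assumptions for illustration.
    """

    def __init__(self, num_dp_units: int, max_batch_size: int = 32, max_wait_s: float = 0.02):
        self.buffer = deque()                   # staging buffer for incoming requests
        self.max_batch_size = max_batch_size    # target batch size per engine step
        self.max_wait_s = max_wait_s            # bound on extra queuing delay added by buffering
        self.window_start = time.monotonic()
        # min-heap of (outstanding_tokens, dp_unit_id): cheapest unit pops first
        self.dp_load = [(0, i) for i in range(num_dp_units)]
        heapq.heapify(self.dp_load)

    def submit(self, request: dict) -> None:
        """Buffer a request instead of dispatching it immediately."""
        self.buffer.append(request)

    def maybe_dispatch(self):
        """Form and dispatch a batch once it is full enough or has waited long enough."""
        waited = time.monotonic() - self.window_start
        if len(self.buffer) < self.max_batch_size and waited < self.max_wait_s:
            return None  # keep buffering to avoid fragmenting the engine's batches

        batch = [self.buffer.popleft()
                 for _ in range(min(self.max_batch_size, len(self.buffer)))]
        if not batch:
            return None

        # Load-aware global allocation: route the batch to the least-loaded DP unit.
        load, unit = heapq.heappop(self.dp_load)
        load += sum(req["prompt_tokens"] for req in batch)
        heapq.heappush(self.dp_load, (load, unit))

        self.window_start = time.monotonic()
        return unit, batch
```

A caller would invoke `submit()` on request arrival and poll `maybe_dispatch()` in the serving loop; in a real system the load counter would also be decremented as requests complete, which this sketch omits for brevity.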