TokenScale: Timely and Accurate Autoscaling for Disaggregated LLM Serving with Token Velocity
By: Ruiqi Lai, Hongrui Liu, Chengzhi Lu, and more
Potential Business Impact:
Makes AI answer questions much faster.
The architectural shift to prefill/decode (PD) disaggregation in LLM serving improves resource utilization but struggles with the bursty nature of modern workloads. Existing autoscaling policies, often retrofitted from monolithic systems like those in AIBrix and DistServe, rely on lagging indicators such as GPU utilization or coarse-grained request counts. This results in slow reactions to load spikes, leading to significant Time-to-First-Token (TTFT) and Time-Per-Output-Token (TPOT) SLO violations and costly over-provisioning. We introduce TokenScale, an autoscaling framework that resolves this performance mismatch through two innovations. First, we propose Token Velocity, a novel metric that unifies the prefill, network, and decode stages by quantifying their rate of work. As a leading indicator of system backpressure, it enables proactive scaling. Second, Convertible Decoders allow decoder GPUs to dynamically execute prefill tasks during traffic spikes, creating a rapid-response buffer that absorbs bursts and eliminates the initialization latency of new prefillers. Our evaluation on a GPU cluster with production traces shows TokenScale improves SLO attainment from 50-88% to 80-96% and reduces costs by 4-14% over state-of-the-art systems, including DistServe, BlitzScale, and AIBrix. By uniting a predictive metric with a flexible system design, TokenScale significantly boosts the performance and efficiency of disaggregated LLM serving infrastructure.
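The abstract describes Token Velocity as a per-stage rate-of-work metric whose ratio against the arrival rate acts as a leading indicator of backpressure. The sketch below is a minimal illustration of that idea, not TokenScale's actual implementation: the `StageSample` fields, threshold values, and `scale_decision` helper are all assumptions made up for this example.

```python
from dataclasses import dataclass

@dataclass
class StageSample:
    # Tokens entering and leaving one stage (prefill, network, or
    # decode) during a sampling window, plus the window length.
    tokens_in: int
    tokens_out: int
    window_s: float

def token_velocity(sample: StageSample) -> float:
    """Rate of work: tokens the stage completes per second."""
    return sample.tokens_out / sample.window_s

def backpressure(sample: StageSample) -> float:
    """Arrival rate divided by service rate; values above 1 mean the
    stage is falling behind and queues are building, before any
    TTFT/TPOT latency violation is observable."""
    arrival = sample.tokens_in / sample.window_s
    service = token_velocity(sample)
    return arrival / service if service > 0 else float("inf")

def scale_decision(stages: dict[str, StageSample],
                   scale_up_thresh: float = 1.1,
                   scale_down_thresh: float = 0.5) -> dict[str, str]:
    """Proactive per-stage scaling driven by backpressure rather than
    by lagging signals like GPU utilization (thresholds are
    illustrative, not from the paper)."""
    decisions = {}
    for name, s in stages.items():
        bp = backpressure(s)
        if bp > scale_up_thresh:
            decisions[name] = "scale-up"
        elif bp < scale_down_thresh:
            decisions[name] = "scale-down"
        else:
            decisions[name] = "hold"
    return decisions

# Example: prefill receives tokens faster than it can process them,
# so it is flagged for scale-up while decode holds steady.
stages = {
    "prefill": StageSample(tokens_in=24_000, tokens_out=18_000, window_s=1.0),
    "decode":  StageSample(tokens_in=9_000,  tokens_out=9_500,  window_s=1.0),
}
print(scale_decision(stages))  # {'prefill': 'scale-up', 'decode': 'hold'}
```

Because backpressure is computed per stage, a burst that hits only the prefill stage can be absorbed there (e.g., by converting decoders to prefill duty, as the paper's Convertible Decoders do) without over-provisioning the whole pipeline.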
Similar Papers
Taming the Chaos: Coordinated Autoscaling for Heterogeneous and Disaggregated LLM Inference
Distributed, Parallel, and Cluster Computing
Makes AI models run faster and cheaper.
DOPO: A Dynamic PD-Disaggregation Architecture for Maximizing Goodput in LLM Inference Serving
Distributed, Parallel, and Cluster Computing
Makes AI answer questions faster and more reliably.