FairBatching: Fairness-Aware Batch Formation for LLM Inference
By: Hongtao Lyu, Boyue Liu, Mingyu Wu, and more
Potential Business Impact:
Makes AI answer questions faster and fairer.
Large language model (LLM) inference systems face a fundamental tension between minimizing Time-to-First-Token (TTFT) latency for new requests and maintaining a high, steady token generation rate (low Time-Per-Output-Token, or TPOT) for ongoing requests. Existing stall-free batching schedulers, such as the one proposed in Sarathi, are effective at preventing decode stalls but introduce significant computational unfairness. They prioritize decode tasks excessively, leading simultaneously to underutilized decode slack and unnecessary prefill queuing delays, which together degrade the system's overall quality of service (QoS). This work identifies the root cause of this unfairness: the non-monotonic nature of Time-Between-Tokens (TBT) as a scheduling metric, and a rigid decode-prioritizing policy that fails to adapt to dynamic workload bursts. We therefore propose FairBatching, a novel LLM inference scheduler that enforces fair resource allocation between prefill and decode tasks. It features an adaptive batch capacity determination mechanism, which dynamically adjusts the computational budget to improve GPU utilization without triggering SLO violations. Its fair and dynamic batch formation algorithm breaks away from the decode-prioritizing paradigm, allowing computational resources to be reclaimed from bursting decode tasks to serve prefill surges and thereby achieving global fairness. FairBatching also provides a novel load estimation method, enabling more effective coordination with upper-level schedulers. Implemented and evaluated on realistic traces, FairBatching reduces TTFT tail latency by up to 2.29x while robustly maintaining TPOT SLOs, achieving an overall 20.0% improvement in single-node capacity and a 54.3% improvement in cluster-level capacity.
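To make the contrast concrete, the sketch below compares a decode-prioritizing batch former with a fair one under a shared per-iteration token budget. This is a minimal illustration of the idea described in the abstract, not the paper's actual algorithm: the names (`token_budget`, `decode_tokens`, `prefill_queue`) and the SLO-share heuristic are assumptions introduced here for illustration.

```python
# Hypothetical sketch: decode-prioritizing vs. fair batch formation under a
# fixed per-iteration token budget. All parameter names and the SLO-share
# heuristic are illustrative, not taken from the FairBatching paper.

def decode_prioritizing(token_budget, decode_tokens, prefill_queue):
    """Sarathi-style stall-free batching: all decode tokens are admitted
    first; prefills only receive whatever slack remains."""
    remaining = token_budget - decode_tokens
    prefill_chunk = min(remaining, sum(prefill_queue)) if remaining > 0 else 0
    return decode_tokens, prefill_chunk

def fair_batching(token_budget, decode_tokens, prefill_queue,
                  decode_slo_share=0.5):
    """Fair formation: during a decode burst, cap admitted decode tokens at
    an SLO-derived share of the budget, so queued prefills still progress;
    decode tasks beyond the cap wait one iteration."""
    pending_prefill = sum(prefill_queue)
    # Only reclaim decode tokens when prefills actually need the budget.
    decode_cap = max(int(token_budget * decode_slo_share),
                     token_budget - pending_prefill)
    admitted_decodes = min(decode_tokens, decode_cap)
    prefill_chunk = min(token_budget - admitted_decodes, pending_prefill)
    return admitted_decodes, prefill_chunk
```

Under a decode burst (budget 512, 500 decode tokens, 300 queued prefill tokens), the decode-prioritizing former leaves prefills only 12 tokens per iteration, while the fair former reclaims budget so both sides make comparable progress; when decodes are light, both behave identically.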
Similar Papers
From Tokens to Layers: Redefining Stall-Free Scheduling for LLM Serving with Layered Prefill
Machine Learning (CS)
Makes AI faster and use less power.
TokenScale: Timely and Accurate Autoscaling for Disaggregated LLM Serving with Token Velocity
Distributed, Parallel, and Cluster Computing
Makes AI answer questions much faster.
Equinox: Holistic Fair Scheduling in Serving Large Language Models
Distributed, Parallel, and Cluster Computing
Makes AI answer questions faster and fairer.