L4: Low-Latency and Load-Balanced LLM Serving via Length-Aware Scheduling
By: Yitao Yuan, Chenqi Zhao, Bohan Zhao, et al.
Efficiently harnessing GPU compute is critical to improving user experience and reducing operational costs in large language model (LLM) services. However, current inference engine schedulers overlook the attention backend's sensitivity to request-length heterogeneity within a batch. As state-of-the-art models now support context windows exceeding 128K tokens, this once-tolerable inefficiency has escalated into a primary system bottleneck, causing severe performance degradation through GPU underutilization and increased latency. We present L4, a runtime system that dynamically reschedules requests across multiple instances serving the same LLM to mitigate per-instance length heterogeneity. L4 partitions these instances into length-specialized groups, each handling requests within a designated length range, naturally forming a pipeline as requests flow through them. L4 uses a dynamic programming algorithm to efficiently find the stage partition with the best QoE, and employs runtime range refinement together with decentralized load (re)balancing both across and within groups, achieving a balanced and efficient multi-instance service. Our evaluation shows that, under the same configuration, L4 reduces end-to-end latency by up to 67% and tail latency by up to 69%, while improving overall system throughput by up to 2.89x compared to state-of-the-art multi-instance scheduling systems.
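To make the stage-partition idea concrete, the sketch below shows one way a dynamic program could split request-length buckets into contiguous stages. This is only an illustration: the per-bucket load estimates, the `partition_stages` name, and the "minimize the heaviest stage" objective are assumptions standing in for the paper's actual QoE metric, which the abstract does not specify.

```python
# Hypothetical sketch of DP-based stage partitioning (not the paper's code).
# Assumption: each request-length bucket has an estimated load, and a good
# partition minimizes the heaviest stage's load (a proxy for the QoE objective).

from functools import lru_cache
from typing import List, Tuple


def partition_stages(bucket_load: List[float], num_stages: int) -> Tuple[float, List[int]]:
    """Split length buckets into `num_stages` contiguous groups so that the
    heaviest group's total load is minimized. Returns (best_max_load, splits),
    where `splits` are the bucket indices at which new stages begin."""
    n = len(bucket_load)
    prefix = [0.0] * (n + 1)
    for i, w in enumerate(bucket_load):
        prefix[i + 1] = prefix[i] + w

    def load(i: int, j: int) -> float:
        # Total load of buckets in the half-open range [i, j).
        return prefix[j] - prefix[i]

    @lru_cache(maxsize=None)
    def dp(i: int, k: int) -> Tuple[float, Tuple[int, ...]]:
        # Best (max stage load, split points) for buckets [i, n) using k stages.
        if k == 1:
            return load(i, n), ()
        best: Tuple[float, Tuple[int, ...]] = (float("inf"), ())
        for j in range(i + 1, n - k + 2):  # first stage covers buckets [i, j)
            rest, splits = dp(j, k - 1)
            cand = (max(load(i, j), rest), (j,) + splits)
            if cand[0] < best[0]:
                best = cand
        return best

    best_load, splits = dp(0, num_stages)
    return best_load, list(splits)


# Example: estimated load per length bucket, ordered from short to long contexts.
buckets = [5.0, 8.0, 6.0, 12.0, 20.0, 9.0]
print(partition_stages(buckets, num_stages=3))
# -> (29.0, [1, 4]): stages cover buckets [0:1), [1:4), [4:6)
```

In a real deployment the objective would presumably also fold in the pipeline effect of requests flowing between length-specialized groups, and the runtime range refinement described in the abstract would adjust the resulting boundaries as the length distribution drifts.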