SLO-Aware Scheduling for Large Language Model Inferences
By: Jinqi Huang, Yi Xiong, Xuebing Yu, and more
Potential Business Impact:
Helps AI services answer different types of requests faster while meeting their response-time targets.
Large language models (LLMs) have revolutionized applications such as code completion, chatbots, and online classification. To elevate user experiences, service level objectives (SLOs) serve as crucial benchmarks for assessing the capabilities of inference services. In practice, an inference service processes multiple types of tasks, each with its own distinct SLO. To ensure a satisfactory user experience, scheduling should account for each request's distinct SLO. However, existing designs lack this consideration, leading to poor hardware utilization and suboptimal performance. This paper analyzes scenarios in which tasks with varying SLOs are processed, and introduces a simulated annealing-based scheduler that decides the request priority sequence based on each request's SLO, input length, and estimated output length. As the first scheduler specialized for multi-SLO scenarios, this work improves SLO attainment by up to 5x and reduces average latency by 31.6% on the Python-Code-23k-ShareGPT and ShareGPT_Vicuna_unfiltered datasets, compared with the current state-of-the-art framework vLLM and the newer framework LMDeploy.
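To make the idea concrete, below is a minimal sketch of how a simulated annealing scheduler could order queued requests using each request's SLO, input length, and estimated output length. The `Request` fields, the toy latency model, and the SLO-violation cost function are assumptions introduced for illustration only; they are not the paper's actual formulation or implementation.

```python
# Illustrative sketch only: a simulated annealing search over request orderings
# that tries to minimize SLO violations. All models here are simplified assumptions.
import math
import random
from dataclasses import dataclass

@dataclass
class Request:
    slo_ms: float           # per-request latency target (assumed field)
    input_tokens: int       # prompt length
    est_output_tokens: int  # predicted generation length

def estimated_latency_ms(req: Request, queue_delay_ms: float) -> float:
    # Toy linear latency model: queueing delay plus prefill and decode costs.
    return queue_delay_ms + 0.05 * req.input_tokens + 2.0 * req.est_output_tokens

def cost(order: list[Request]) -> float:
    # Penalize each request by how far its estimated latency exceeds its SLO,
    # assuming (for simplicity) that requests finish one after another.
    elapsed, penalty = 0.0, 0.0
    for req in order:
        elapsed = estimated_latency_ms(req, elapsed)
        if elapsed > req.slo_ms:
            penalty += (elapsed - req.slo_ms) / req.slo_ms
    return penalty

def anneal(requests: list[Request], steps: int = 5000, t0: float = 1.0) -> list[Request]:
    # Standard simulated annealing: propose a random swap, always accept
    # improvements, accept regressions with temperature-scaled probability.
    order, best = list(requests), list(requests)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-6
        i, j = random.sample(range(len(order)), 2)
        candidate = list(order)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = cost(candidate) - cost(order)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            order = candidate
            if cost(order) < cost(best):
                best = list(order)
    return best

if __name__ == "__main__":
    queue = [Request(200, 512, 64), Request(1500, 2048, 512), Request(400, 128, 128)]
    for req in anneal(queue):
        print(req)
```

In this sketch, tight-SLO requests with short estimated outputs tend to float toward the front of the priority sequence, while long, loose-SLO requests drift later; the actual scheduler in the paper would additionally account for batching and concurrent execution.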
Similar Papers
Tempo: Application-aware LLM Serving with Mixed SLO Requirements
Distributed, Parallel, and Cluster Computing
Makes AI answer questions faster and better.
SLOs-Serve: Optimized Serving of Multi-SLO LLMs
Distributed, Parallel, and Cluster Computing
Makes AI answer questions much faster.
Optimal Scheduling Algorithms for LLM Inference: Theory and Practice
Machine Learning (CS)
Makes AI answer questions much faster.