Score: 2

SLO-Aware Scheduling for Large Language Model Inferences

Published: April 21, 2025 | arXiv ID: 2504.14966v2

By: Jinqi Huang, Yi Xiong, Xuebing Yu, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Enables LLM inference services to meet per-request latency targets (SLOs) more reliably, making chatbots, code completion, and other AI applications faster and more dependable.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have revolutionized applications such as code completion, chatbots, and online classification. To elevate user experiences, service level objectives (SLOs) serve as crucial benchmarks for assessing an inference service's capabilities. In practice, an inference service processes multiple types of tasks, each with its own distinct SLO, so each request's SLO should be considered during scheduling to ensure a satisfactory user experience. However, existing designs lack this consideration, leading to inefficient hardware utilization and suboptimal performance. This paper analyzes scenarios in which tasks with varying SLOs are processed together and introduces a simulated annealing-based scheduler that decides the request priority sequence based on each request's SLO, input length, and predicted output length. As the first specialized scheduler for multi-SLO scenarios, this work improves SLO attainment by up to 5x and reduces average latency by 31.6% on the Python-Code-23k-ShareGPT and ShareGPT_Vicuna_unfiltered datasets, compared with the state-of-the-art framework vLLM and the newer framework LMDeploy.
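The core idea, searching over request priority orders with simulated annealing guided by each request's SLO, input length, and predicted output length, can be illustrated with a short sketch. The `Request` fields, the toy serial latency model, and the annealing parameters below are illustrative assumptions for exposition, not the paper's actual cost model or implementation:

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Request:
    slo: float           # latency target in seconds (hypothetical field)
    input_len: int       # prompt tokens
    est_output_len: int  # predicted completion tokens

def est_finish_time(req: Request, start: float, tok_rate: float = 100.0) -> float:
    # Toy serial model: service time grows with input + predicted output tokens.
    return start + (req.input_len + req.est_output_len) / tok_rate

def slo_misses(order: list[int], reqs: list[Request]) -> int:
    # Cost: number of requests whose estimated finish time exceeds their SLO.
    t, missed = 0.0, 0
    for i in order:
        t = est_finish_time(reqs[i], t)
        if t > reqs[i].slo:
            missed += 1
    return missed

def anneal_schedule(reqs: list[Request], steps: int = 5000,
                    temp: float = 1.0, cooling: float = 0.999):
    order = list(range(len(reqs)))
    best, cur_cost = order[:], slo_misses(order, reqs)
    best_cost = cur_cost
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)   # propose swapping two priorities
        order[i], order[j] = order[j], order[i]
        cost = slo_misses(order, reqs)
        # Accept improvements; accept regressions with Boltzmann probability.
        if cost <= cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            cur_cost = cost
            if cost < best_cost:
                best, best_cost = order[:], cost
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
        temp *= cooling                              # cool the temperature
    return best, best_cost

if __name__ == "__main__":
    reqs = [Request(slo=2.0, input_len=300, est_output_len=150),
            Request(slo=0.5, input_len=40, est_output_len=20),
            Request(slo=1.0, input_len=120, est_output_len=80)]
    order, missed = anneal_schedule(reqs)
    print("priority order:", order, "estimated SLO misses:", missed)
```

The cost function here simply counts SLO misses under a serial service model; a real continuous-batching scheduler would need batch-aware latency estimates. The propose-swap, accept-or-revert loop, however, is the essence of simulated annealing over priority sequences.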

Country of Origin
🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing