Semantic Scheduling for LLM Inference
By: Wenyue Hua, Dujian Ding, Yile Gu, and others
Potential Business Impact:
Helps computers finish important jobs first.
Conventional operating system scheduling algorithms are largely content-ignorant: they make decisions based on factors such as latency or fairness without considering the actual intents or semantics of processes. Consequently, they often fail to prioritize tasks that require urgent attention or carry higher importance, as in emergency management scenarios. Recent advances in language models, however, enable semantic analysis of processes, allowing for more intelligent and context-aware scheduling decisions. In this paper, we introduce the concept of semantic scheduling into the scheduling of requests to large language models (LLMs), where the semantics of a process guide its scheduling priority. We present a novel scheduling algorithm with optimal time complexity, designed to minimize the overall waiting time in LLM-based prompt scheduling. To illustrate its effectiveness, we demonstrate a medical emergency management application, underscoring the potential benefits of semantic scheduling for critical, time-sensitive tasks. The code and data are available at https://github.com/Wenyueh/latency_optimization_with_priority_constraints.
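To make the idea concrete, below is a minimal sketch of how a semantic scheduler might work: a classifier assigns an urgency level from a prompt's content, and a priority queue serves the most urgent request first. All names here (classify_urgency, SemanticScheduler, Request) are illustrative assumptions, not the paper's actual algorithm or repository code.

import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    priority: int             # lower value = more urgent (0 = emergency)
    arrival: int              # tie-breaker: FIFO among equal priorities
    prompt: str = field(compare=False)

def classify_urgency(prompt: str) -> int:
    """Hypothetical semantic classifier. In practice this role would be
    played by an LLM or a lightweight model that reads the prompt."""
    emergency_keywords = ("chest pain", "unconscious", "severe bleeding")
    return 0 if any(k in prompt.lower() for k in emergency_keywords) else 1

class SemanticScheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # records arrival order

    def submit(self, prompt: str) -> None:
        # Priority is derived from the prompt's semantics, not its arrival time.
        req = Request(classify_urgency(prompt), next(self._counter), prompt)
        heapq.heappush(self._queue, req)

    def next_request(self) -> Request | None:
        # Pop the most urgent request; ties break in arrival order.
        return heapq.heappop(self._queue) if self._queue else None

scheduler = SemanticScheduler()
scheduler.submit("Summarize this quarterly report.")
scheduler.submit("Patient reports severe chest pain and dizziness.")
print(scheduler.next_request().prompt)  # the emergency prompt is served first

This sketch captures only the prioritization step; the paper's contribution additionally covers an algorithm with optimal time complexity for minimizing overall waiting time under such priority constraints.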
Similar Papers
Prompt-Aware Scheduling for Low-Latency LLM Serving
Machine Learning (CS)
Makes AI answer questions much faster.
Optimal Scheduling Algorithms for LLM Inference: Theory and Practice
Machine Learning (CS)
Makes AI answer questions much faster.
Semantic-Aware Scheduling for GPU Clusters with Large Language Models
Machine Learning (CS)
Makes computer jobs finish much faster.