ML Inference Scheduling with Predictable Latency
By: Haidong Zhao, Nikolaos Georgantas
Machine learning (ML) inference serving systems can schedule requests to improve GPU utilization and to meet service level objectives (SLOs) or deadlines. However, improving GPU utilization may compromise latency-sensitive scheduling, as concurrent tasks contend for GPU resources and thereby introduce interference. Because interference makes latency unpredictable, neglecting it may compromise SLO or deadline satisfaction. Yet existing interference prediction approaches remain limited in several respects, which restricts their usefulness for scheduling. First, they are often coarse-grained, ignoring runtime co-location dynamics and thus limiting their prediction accuracy. Second, they tend to rely on a static prediction model, which may not cope effectively with varying workload characteristics. Motivated by these limitations, we evaluate the shortcomings of existing interference prediction approaches and outline our ongoing work toward achieving efficient ML inference scheduling with predictable latency.
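To make the idea of interference-aware, SLO-constrained placement concrete, the sketch below is a minimal illustration, not the authors' method: it admits a request onto a GPU only if a toy interference model predicts that the request's SLO, and the SLOs of already co-located requests, still hold. The 15% per-co-runner slowdown, the class and function names, and the admission policy are all assumptions made purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Request:
    model: str              # target ML model
    solo_latency_ms: float  # profiled latency when running alone on a GPU
    slo_ms: float           # latency budget (service level objective)


@dataclass
class GPU:
    running: List[Request] = field(default_factory=list)


def predicted_latency(req: Request, num_colocated: int) -> float:
    """Toy interference model: each co-located task inflates latency by a
    fixed 15%. A real predictor would use runtime co-location features
    (batch sizes, kernel mix, memory bandwidth) rather than a static factor."""
    return req.solo_latency_ms * (1.0 + 0.15 * num_colocated)


def schedule(req: Request, gpus: List[GPU]) -> Optional[GPU]:
    """Admit the request onto a GPU only if its predicted latency under
    interference meets its SLO, and co-location does not push any already
    running request past its own SLO."""
    for gpu in gpus:
        # After admission, the new request shares the GPU with len(gpu.running) others.
        if predicted_latency(req, len(gpu.running)) > req.slo_ms:
            continue  # this placement would already miss the new request's SLO
        # Each resident request would likewise gain one more co-runner.
        if all(predicted_latency(r, len(gpu.running)) <= r.slo_ms
               for r in gpu.running):
            gpu.running.append(req)
            return gpu
    return None  # queue or reject: no interference-safe placement found
```

A coarse-grained or static predictor corresponds to hard-coding the slowdown factor as above; the fine-grained, adaptive prediction discussed in the abstract would instead update that estimate from observed co-location behavior at runtime.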