Hybrid Learning and Optimization-Based Dynamic Scheduling for DL Workloads on Heterogeneous GPU Clusters
By: Shruti Dongare, Redwan Ibne Seraj Khan, Hadeel Albahar and more
Potential Business Impact:
Makes computer jobs run faster and use less power.
Modern cloud platforms increasingly host large-scale deep learning (DL) workloads, demanding high-throughput, low-latency GPU scheduling. However, the growing heterogeneity of GPU clusters and limited visibility into application characteristics pose major challenges for existing schedulers, which often rely on offline profiling or application-specific assumptions. We present RLTune, an application-agnostic reinforcement learning (RL)-based scheduling framework that dynamically prioritizes and allocates DL jobs on heterogeneous GPU clusters. RLTune integrates RL-driven prioritization with mixed-integer linear programming (MILP)-based job-to-node mapping to optimize system-wide objectives such as job completion time (JCT), queueing delay, and resource utilization. Trained on large-scale production traces from Microsoft Philly, Helios, and Alibaba, RLTune improves GPU utilization by up to 20%, reduces queueing delay by up to 81%, and shortens JCT by as much as 70%. Unlike prior approaches, RLTune generalizes across diverse workloads without requiring per-job profiling, making it practical for cloud providers to deploy at scale for more efficient, fair, and sustainable DL workload management.
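The abstract describes a two-stage design: an RL policy scores and prioritizes queued jobs, and a MILP maps the prioritized jobs onto heterogeneous nodes each scheduling round. As a rough illustration of what such a MILP mapping step could look like, here is a minimal sketch using PuLP. The `Job`/`Node` fields, the priority-times-speedup objective, and all function names are assumptions made for illustration; this is not RLTune's actual formulation.

```python
# Hypothetical sketch of a MILP-based job-to-node mapping step.
# Jobs carry priority scores (assumed here to come from an RL policy);
# nodes are heterogeneous, each with free GPUs and a relative speed factor.
# The MILP maximizes priority-weighted placement subject to GPU capacity.
from dataclasses import dataclass
import pulp


@dataclass
class Job:
    name: str
    gpus: int        # GPUs requested
    priority: float  # priority score, e.g. produced by the RL policy


@dataclass
class Node:
    name: str
    free_gpus: int   # currently available GPUs
    speedup: float   # relative throughput of this GPU type (e.g. V100 vs. K80)


def map_jobs_to_nodes(jobs: list[Job], nodes: list[Node]) -> dict[str, str]:
    prob = pulp.LpProblem("job_to_node_mapping", pulp.LpMaximize)

    # x[j, n] = 1 if job j is placed on node n in this scheduling round
    x = {(j.name, n.name): pulp.LpVariable(f"x_{j.name}_{n.name}", cat=pulp.LpBinary)
         for j in jobs for n in nodes}

    # Objective: prefer placing high-priority jobs on fast nodes
    prob += pulp.lpSum(j.priority * n.speedup * x[j.name, n.name]
                       for j in jobs for n in nodes)

    # Each job is placed on at most one node (unplaced jobs keep waiting)
    for j in jobs:
        prob += pulp.lpSum(x[j.name, n.name] for n in nodes) <= 1

    # Node GPU capacity constraints
    for n in nodes:
        prob += pulp.lpSum(j.gpus * x[j.name, n.name] for j in jobs) <= n.free_gpus

    prob.solve(pulp.PULP_CBC_CMD(msg=False))

    return {j.name: n.name for j in jobs for n in nodes
            if x[j.name, n.name].value() and x[j.name, n.name].value() > 0.5}


if __name__ == "__main__":
    jobs = [Job("resnet", 4, 0.9), Job("bert", 8, 0.7), Job("gan", 2, 0.4)]
    nodes = [Node("v100-node", 8, 1.0), Node("k80-node", 4, 0.4)]
    print(map_jobs_to_nodes(jobs, nodes))
```

In a full scheduler, the RL policy would recompute priorities as jobs arrive and finish, and the MILP would be re-solved each round; the sketch above shows only a single round with static inputs.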
Similar Papers
Semantic-Aware Scheduling for GPU Clusters with Large Language Models
Machine Learning (CS)
Makes computer jobs finish much faster.
Resource Heterogeneity-Aware and Utilization-Enhanced Scheduling for Deep Learning Clusters
Distributed, Parallel, and Cluster Computing
Makes computer learning faster and better.
Enhancing Cluster Scheduling in HPC: A Continuous Transfer Learning for Real-Time Optimization
Distributed, Parallel, and Cluster Computing
Makes computer jobs run faster and smarter.