An Online Fragmentation-Aware GPU Scheduler for Multi-Tenant MIG-based Clouds
By: Marco Zambianco, Lorenzo Fasol, Roberto Doriguzzi-Corin
Potential Business Impact:
Lets more AI programs run on shared computer chips.
The explosive growth of AI applications has created unprecedented demand for GPU resources. Cloud providers meet this demand through GPU-as-a-Service platforms that offer rentable GPU resources for running AI workloads. In this context, sharing GPU resources among tenants is essential to maximize the number of scheduled workloads. Among the various GPU sharing technologies, NVIDIA's Multi-Instance GPU (MIG) stands out by partitioning GPUs at the hardware level into isolated slices with dedicated compute and memory, ensuring strong tenant isolation, preventing resource contention, and enhancing security. Despite these advantages, MIG's fixed partitioning introduces scheduling rigidity, leading to severe GPU fragmentation in multi-tenant environments where workloads are continuously deployed and terminated. Fragmentation leaves GPUs underutilized, limiting the number of workloads that can be accommodated. To overcome this challenge, we propose a novel scheduling framework for MIG-based clouds that maximizes workload acceptance while mitigating fragmentation in an online, workload-agnostic setting. We introduce a fragmentation metric that quantifies resource inefficiency and guides allocation decisions. Building on this metric, our greedy scheduling algorithm selects the GPU and MIG slice that minimize fragmentation growth for each incoming workload. We evaluate our approach against multiple baseline strategies under diverse workload distributions. Results demonstrate that our method consistently achieves higher workload acceptance rates, yielding an average 10% increase in the number of scheduled workloads under heavy load, while using approximately the same number of GPUs as the benchmark methods.
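To make the greedy idea concrete, the sketch below shows a minimal fragmentation-aware placement loop. It is an illustration under simplifying assumptions, not the authors' exact algorithm: GPUs are modeled only by a free-slice count (7 compute slices, A100-style), the fragmentation metric simply counts stranded free slices on partially used GPUs, and real MIG placement constraints (slice geometry, valid profile combinations) are ignored.

```python
# Hedged sketch of a greedy, fragmentation-aware MIG scheduler.
# Assumptions (not from the paper): GPUs are free-slice counters,
# and fragmentation = free slices stranded on partially used GPUs.

MIG_PROFILES = (1, 2, 3, 4, 7)  # compute-slice sizes of common MIG profiles
GPU_SLICES = 7                  # total compute slices per GPU (A100-style)

def fragmentation(free_slices):
    """Stranded capacity on one GPU: free slices on a partially used
    GPU count as fragmented; fully free and fully used GPUs score 0."""
    if free_slices in (0, GPU_SLICES):
        return 0
    return free_slices

def schedule(gpus, request):
    """Place `request` (a MIG profile size) on the GPU whose
    post-placement fragmentation grows the least.

    `gpus` is a mutable list of per-GPU free-slice counts.
    Returns the chosen GPU index, or None if nothing fits."""
    if request not in MIG_PROFILES:
        raise ValueError(f"unsupported MIG profile size: {request}")
    best, best_delta = None, None
    for i, free in enumerate(gpus):
        if free < request:
            continue  # workload does not fit on this GPU
        delta = fragmentation(free - request) - fragmentation(free)
        if best_delta is None or delta < best_delta:
            best, best_delta = i, delta
    if best is not None:
        gpus[best] -= request
    return best
```

With this metric, placing a workload on an already-fragmented GPU can reduce total fragmentation (the delta is negative), so the greedy choice naturally packs workloads onto partially used GPUs and keeps empty GPUs intact for large future requests: given `gpus = [7, 7, 4]`, a 3-slice request lands on the third GPU rather than breaking open a fresh one.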
Similar Papers
Flex-MIG: Enabling Distributed Execution on MIG
Distributed, Parallel, and Cluster Computing
Lets many computers share one powerful graphics chip.
On the Partitioning of GPU Power among Multi-Instances
Distributed, Parallel, and Cluster Computing
Tracks computer chip power use per task.
Reducing Fragmentation and Starvation in GPU Clusters through Dynamic Multi-Objective Scheduling
Distributed, Parallel, and Cluster Computing
Makes AI computers use their power better.