MSched: GPU Multitasking via Proactive Memory Scheduling
By: Weihang Shen, Yinqiu Chen, Rong Chen, et al.
Limited HBM capacity has become the primary bottleneck to hosting a growing number of ever-larger GPU tasks. While demand paging extends capacity via host DRAM, it incurs up to a 78x slowdown due to the massive working sets and poor locality of GPU workloads. We observe, however, that GPU memory access patterns are inherently predictable from kernel launch arguments, and that the asynchronous nature of kernel execution leaves time to act on those predictions. Leveraging this, we propose MSched, an OS-level scheduler that extends GPU context switching to include proactive working-set preparation, coalescing what would otherwise be fragmented, eventual, and expensive page faults into a single efficient migration. MSched employs a template-based approach to predict working sets with near-perfect accuracy, and co-designs the task scheduler and memory manager to enforce a globally optimal page-placement policy. Evaluation demonstrates that MSched outperforms demand paging by up to 11.05x for scientific and deep learning workloads, and by up to 57.88x for LLM workloads, under memory oversubscription.
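To make the idea concrete, below is a minimal CUDA sketch of proactive working-set preparation. This is not the authors' implementation: it assumes unified memory (cudaMallocManaged) and uses the standard cudaMemPrefetchAsync call to migrate a kernel's predicted working set, derived purely from its launch arguments, in one bulk transfer before launch. The helper names (axpy_working_set, prepare_working_set) are hypothetical, chosen only to illustrate the template-based prediction the abstract describes.

// Hypothetical sketch, not MSched's code: predict a kernel's working set
// from its launch arguments (the "template") and prefetch it in one bulk
// migration instead of letting demand paging fault pages in one by one.
#include <cuda_runtime.h>
#include <cstdio>

// A memory range the kernel is predicted to touch.
struct Range { const void *ptr; size_t bytes; };

__global__ void axpy(float a, const float *x, float *y, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// Template for axpy: its working set follows directly from its arguments,
// namely n floats of x and n floats of y. (Illustrative helper.)
static void axpy_working_set(const float *x, float *y, size_t n, Range out[2]) {
    out[0] = { x, n * sizeof(float) };
    out[1] = { y, n * sizeof(float) };
}

// Proactive preparation: issue one coalesced, asynchronous migration per
// range so the transfer overlaps with whatever the GPU is finishing.
static void prepare_working_set(const Range *rs, int k, int dev, cudaStream_t s) {
    for (int i = 0; i < k; ++i)
        cudaMemPrefetchAsync(rs[i].ptr, rs[i].bytes, dev, s);
}

int main() {
    const size_t n = 1 << 24;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (size_t i = 0; i < n; ++i) { x[i] = 1.f; y[i] = 2.f; }

    cudaStream_t s;
    cudaStreamCreate(&s);

    // Predict, then migrate the whole working set before launch: pages
    // arrive in bulk rather than via thousands of individual faults.
    Range ws[2];
    axpy_working_set(x, y, n, ws);
    prepare_working_set(ws, 2, /*device=*/0, s);

    axpy<<<(n + 255) / 256, 256, 0, s>>>(3.f, x, y, n);
    cudaStreamSynchronize(s);
    printf("y[0] = %f\n", y[0]);  // expect 5.0

    cudaFree(x); cudaFree(y); cudaStreamDestroy(s);
    return 0;
}

Under oversubscription, demand paging would instead fault these pages in piecemeal during kernel execution; issuing the prefetch ahead of the launch on the kernel's own stream yields the coalesced, ahead-of-time migration that the abstract describes MSched performing at context-switch time.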