Managing Multi-Instance GPUs for High Throughput and Energy Savings
By: Abhijeet Saraha, Yuanbo Li, Chris Porter, and more
Potential Business Impact:
Gets more work out of each GPU while using less energy.
Modern GPUs such as NVIDIA's Ampere series (A30, A100) and Hopper series (H100, H200) offer Multi-Instance GPU (MIG) partitioning, which provides both performance and security isolation. MIG also enables substantial concurrency, but exploiting it is challenging because of the complex constraints on how the chip can be partitioned. In this work, we develop partitioning and scheduling schemes for a range of workloads, from scientific applications to modern ML workloads, including LLMs. Our schemes combine dynamic memory estimation, partition fusion, and partition fission. We also support process restart to recover from out-of-memory errors, with early restart as an optimization. This approach yields up to 6.20x throughput and 5.93x energy improvements for general workloads, and 1.59x throughput and 1.12x energy improvements for ML workloads on an A100 GPU. Applied to LLM workloads, the technique delivers up to 1.43x higher throughput and 1.11x energy savings.
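To make the abstract's ideas concrete, here is a minimal, illustrative Python sketch of a MIG-aware scheduler that combines dynamic memory estimation, partition fission and fusion, and restart after an out-of-memory error. Every name here (MigScheduler, fission, fusion, handle_oom, the simplified slice model) is an assumption made for illustration, not the authors' actual implementation or NVIDIA's MIG API.

```python
"""Illustrative sketch of a MIG-aware scheduler; not the paper's implementation."""
from dataclasses import dataclass
from typing import List, Optional

# Simplified A100 model: 7 compute slices, only certain partition sizes allowed.
PARTITION_SIZES = [1, 2, 3, 4, 7]
TOTAL_SLICES = 7

@dataclass
class Job:
    name: str
    est_slices: int      # dynamic memory estimate, rounded up to slices
    oom_count: int = 0   # how many times the job has hit out-of-memory

@dataclass
class Partition:
    slices: int
    job: Optional[Job] = None

class MigScheduler:
    def __init__(self) -> None:
        self.partitions: List[Partition] = [Partition(slices=TOTAL_SLICES)]

    def _fit_size(self, need: int) -> int:
        """Smallest valid partition size covering the estimate."""
        return min(s for s in PARTITION_SIZES if s >= need)

    def fission(self, part: Partition, need: int) -> None:
        """Split a free partition so a job gets just enough slices."""
        size = self._fit_size(need)
        leftover = part.slices - size
        part.slices = size
        if leftover > 0:
            self.partitions.append(Partition(slices=leftover))

    def fusion(self) -> None:
        """Merge all free partitions back into one larger partition."""
        free = [p for p in self.partitions if p.job is None]
        busy = [p for p in self.partitions if p.job is not None]
        if free:
            busy.append(Partition(slices=sum(p.slices for p in free)))
        self.partitions = busy

    def schedule(self, job: Job) -> bool:
        """Place a job on the smallest free partition that fits its estimate."""
        need = self._fit_size(job.est_slices)
        candidates = [p for p in self.partitions if p.job is None and p.slices >= need]
        if not candidates:
            return False
        part = min(candidates, key=lambda p: p.slices)
        self.fission(part, need)
        part.job = job
        return True

    def handle_oom(self, job: Job) -> bool:
        """Restart an OOM'd job with a doubled estimate on a bigger partition."""
        job.oom_count += 1
        job.est_slices = min(TOTAL_SLICES, job.est_slices * 2)
        for p in self.partitions:
            if p.job is job:
                p.job = None
        self.fusion()
        return self.schedule(job)

if __name__ == "__main__":
    sched = MigScheduler()
    sched.schedule(Job("hpc-kernel", est_slices=2))
    sched.schedule(Job("llm-infer", est_slices=3))
    print([(p.slices, p.job.name if p.job else None) for p in sched.partitions])
```

In this sketch, fission carves a right-sized instance out of a free partition, fusion recombines freed slices so larger jobs can be placed later, and handle_oom shows the restart path: the job's memory estimate is grown and the job is rescheduled on a larger partition, which is one plausible reading of the out-of-memory recovery described in the abstract.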
Similar Papers
Understanding the Landscape of Ampere GPU Memory Errors
Distributed, Parallel, and Cluster Computing
Finds computer errors to make supercomputers more reliable.
Characterizing GPU Energy Usage in Exascale-Ready Portable Science Applications
Performance
Saves energy on supercomputers by using less precise numbers.