Understanding the Landscape of Ampere GPU Memory Errors
By: Zhu Zhu, Yu Sun, Dhatri Parakal, and more
Potential Business Impact:
Characterizes GPU memory errors to help make supercomputers more reliable and efficient.
Graphics Processing Units (GPUs) have become a de facto solution for accelerating high-performance computing (HPC) applications. Understanding their memory error behavior is an essential step toward achieving efficient and reliable HPC systems. In this work, we present a large-scale cross-supercomputer study to characterize GPU memory reliability, covering three supercomputers (Delta, Polaris, and Perlmutter), all equipped with NVIDIA A100 GPUs. We examine error logs spanning 67.77 million GPU device-hours across 10,693 GPUs. We compare error rates and mean time between errors (MTBE) and highlight both shared and distinct error characteristics among these three systems. Based on these observations and analyses, we discuss the implications and lessons learned, focusing on the reliable operation of supercomputers, the choice of checkpointing interval, and the comparison of reliability characteristics with those of previous-generation GPUs. Our characterization study provides valuable insights into fault-tolerant HPC system design and operation, enabling more efficient execution of HPC applications.
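For context on the checkpointing-interval discussion, analyses of this kind commonly pair an observed MTBE with the first-order Young/Daly approximation for the optimal checkpoint interval, sqrt(2 * C * MTBF). The Python sketch below is not from the paper; the error count and checkpoint cost are placeholder assumptions used only to illustrate the calculation, while the device-hour total is taken from the abstract.

    # Minimal sketch: estimate MTBE from aggregate device-hours and an
    # assumed error count, then derive a Young/Daly checkpoint interval.
    # Numbers other than device_hours are illustrative assumptions.

    def mtbe_hours(device_hours: float, error_count: int) -> float:
        """Mean time between errors, in device-hours per error."""
        return device_hours / error_count

    def young_daly_interval(mtbe: float, checkpoint_cost: float) -> float:
        """First-order optimal checkpoint interval: sqrt(2 * C * MTBF)."""
        return (2.0 * checkpoint_cost * mtbe) ** 0.5

    if __name__ == "__main__":
        device_hours = 67.77e6      # GPU device-hours reported in the abstract
        assumed_errors = 100_000    # placeholder; actual counts are per system in the paper
        mtbe = mtbe_hours(device_hours, assumed_errors)
        interval = young_daly_interval(mtbe, checkpoint_cost=0.1)  # assumed 0.1 h per checkpoint
        print(f"MTBE ~ {mtbe:.1f} device-hours; checkpoint roughly every {interval:.1f} h")

In practice the MTBE that matters for the interval is the one seen by a single job's set of GPUs, so per-system and per-GPU rates like those characterized in the paper feed directly into this formula.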