On the energy efficiency of sparse matrix computations on multi-GPU clusters
By: Massimo Bernaschi, Alessandro Celestini, Pasqua D'Ambra, et al.
Potential Business Impact:
Makes supercomputers use less power when running large-scale scientific computations.
We investigate the energy efficiency of a library designed for parallel computations with sparse matrices. The library leverages high-performance, energy-efficient Graphics Processing Unit (GPU) accelerators to enable large-scale scientific applications. Our primary development objective was to maximize parallel performance and scalability in solving sparse linear systems whose dimensions far exceed the memory capacity of a single node. To this end, we devised methods that expose a high degree of parallelism while optimizing algorithmic implementations for efficient multi-GPU usage. Previous work has already demonstrated the library's performance efficiency on large-scale systems comprising thousands of NVIDIA GPUs, achieving improvements over state-of-the-art solutions. In this paper, we extend those results by providing energy profiles that address the growing sustainability requirements of modern HPC platforms. We present our methodology and tools for accurate runtime energy measurements of the library's core components and discuss the findings. Our results confirm that optimizing GPU computations and minimizing data movement across memory and computing nodes reduces both time-to-solution and energy consumption. Moreover, we show that the library delivers substantial advantages over comparable software frameworks on standard benchmarks.
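The paper's core contribution is accurate runtime energy measurement of library components. While the authors' actual tooling is not shown here, the standard approach on NVIDIA GPUs is to sample instantaneous power through an NVML-style interface (e.g. `nvmlDeviceGetPowerUsage`) around a kernel and integrate the samples over time. The sketch below illustrates only that integration step; the function name and the synthetic samples are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch, not the paper's actual measurement tool:
# estimate the energy consumed during a kernel's execution from
# periodic power samples (as exposed by NVML-style interfaces).

def energy_from_samples(timestamps_s, power_w):
    """Integrate sampled power (watts) over time (seconds) with the
    trapezoidal rule, returning an energy estimate in joules."""
    if len(timestamps_s) != len(power_w) or len(timestamps_s) < 2:
        raise ValueError("need at least two matching (time, power) samples")
    energy_j = 0.0
    for i in range(1, len(timestamps_s)):
        dt = timestamps_s[i] - timestamps_s[i - 1]
        energy_j += 0.5 * (power_w[i] + power_w[i - 1]) * dt
    return energy_j

# Example: a GPU drawing a constant 250 W for 2 s consumes 500 J.
print(energy_from_samples([0.0, 1.0, 2.0], [250.0, 250.0, 250.0]))
```

In practice the sampling rate matters: power draw during short sparse kernels can vary quickly, so coarse sampling under-resolves spikes, which is one reason careful measurement methodology (as the paper discusses) is needed for trustworthy energy profiles.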
Similar Papers
Racing to Idle: Energy Efficiency of Matrix Multiplication on Heterogeneous CPU and GPU Architectures
Distributed, Parallel, and Cluster Computing
Makes computers faster while using less power.
Power-Capping Metric Evaluation for Improving Energy Efficiency in HPC Applications
Distributed, Parallel, and Cluster Computing
Saves computing energy while speeding up science.
Characterizing GPU Energy Usage in Exascale-Ready Portable Science Applications
Performance
Saves energy on supercomputers by using less precise numbers.