Layout-Agnostic MPI Abstraction for Distributed Computing in Modern C++
By: Jiří Klepl, Martin Kruliš, Matyáš Brabec
Potential Business Impact:
Makes supercomputers easier to program.
The Message Passing Interface (MPI) has been a well-established technology in the domain of distributed high-performance computing for several decades. However, one of its greatest drawbacks is its rather ancient pure-C interface, which lacks many useful features of modern languages (namely C++), such as basic type checking and support for generic code design. In this paper, we propose a novel abstraction for MPI, implemented as an extension of the C++ Noarr library. It follows Noarr paradigms (first-class layout and traversal abstraction) and offers a layout-agnostic design of MPI applications. We also implemented a layout-agnostic distributed GEMM kernel as a case study to demonstrate the usability and syntax of the proposed abstraction. We show that the abstraction achieves performance comparable to the state-of-the-art MPI C++ bindings while allowing for a more flexible design of distributed applications.
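To make the "layout-agnostic" idea concrete, the following is a minimal illustrative sketch, not the actual Noarr MPI API from the paper. It shows, with plain MPI calls, how the element type and the memory layout of a matrix tile can be carried by C++ types and values so that the matching MPI datatype is derived automatically, rather than hard-coded at every call site. The names mpi_type, tile_layout, make_tile_type, and send_tile are hypothetical and chosen only for this example.

```cpp
// Illustrative sketch (NOT the Noarr MPI API): type-checked, layout-driven
// MPI communication of a matrix tile.
#include <mpi.h>

// Map C++ element types to MPI datatypes at compile time -- the kind of
// basic type checking the pure-C MPI interface cannot provide.
template <typename T> struct mpi_type;
template <> struct mpi_type<float>  { static MPI_Datatype get() { return MPI_FLOAT; } };
template <> struct mpi_type<double> { static MPI_Datatype get() { return MPI_DOUBLE; } };
template <> struct mpi_type<int>    { static MPI_Datatype get() { return MPI_INT; } };

// A layout descriptor for a rows x cols tile stored inside a larger matrix
// with leading dimension ld; row_major selects which index is contiguous.
struct tile_layout {
    int rows, cols, ld;
    bool row_major;
};

// Build the MPI derived datatype that matches the tile's memory layout.
template <typename T>
MPI_Datatype make_tile_type(const tile_layout &l) {
    MPI_Datatype t;
    if (l.row_major)  // 'rows' strided blocks of 'cols' contiguous elements
        MPI_Type_vector(l.rows, l.cols, l.ld, mpi_type<T>::get(), &t);
    else              // 'cols' strided blocks of 'rows' contiguous elements
        MPI_Type_vector(l.cols, l.rows, l.ld, mpi_type<T>::get(), &t);
    MPI_Type_commit(&t);
    return t;
}

// The call site mentions neither the element's MPI datatype nor the storage
// order: switching from row-major to column-major (or changing the leading
// dimension) only changes the tile_layout value, not the communication code.
template <typename T>
void send_tile(const T *data, const tile_layout &l, int dest, MPI_Comm comm) {
    MPI_Datatype t = make_tile_type<T>(l);
    MPI_Send(data, 1, t, dest, /*tag=*/0, comm);
    MPI_Type_free(&t);
}
```

In the paper's setting, the layout and traversal objects come from Noarr itself, so the same decoupling extends to arbitrary layouts (e.g., tiled or z-curve orderings) rather than just the row-/column-major choice sketched here.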
Similar Papers
Concepts for designing modern C++ interfaces for MPI
Distributed, Parallel, and Cluster Computing
Makes supercomputers work better with new code.
Do MPI Derived Datatypes Actually Help? A Single-Node Cross-Implementation Study on Shared-Memory Communication
Distributed, Parallel, and Cluster Computing
Makes computer programs share data faster.
LLM-HPC++: Evaluating LLM-Generated Modern C++ and MPI+OpenMP Codes for Scalable Mandelbrot Set Computation
Distributed, Parallel, and Cluster Computing
AI writes super-fast computer programs for science.