Understanding the Communication Needs of Asynchronous Many-Task Systems -- A Case Study of HPX+LCI
By: Jiakun Yan, Hartmut Kaiser, Marc Snir
Potential Business Impact:
Speeds up how supercomputer programs communicate, so large scientific applications finish faster.
Asynchronous Many-Task (AMT) systems offer a potential solution for efficiently programming complex scientific applications on extreme-scale heterogeneous architectures. However, their communication needs differ from those of traditional bulk-synchronous parallel (BSP) applications, posing new challenges for underlying communication libraries. This work systematically studies the communication needs of AMTs and explores how communication libraries can be structured to better satisfy them through a case study of a real-world AMT system, HPX. We first examine its communication stack layout and formalize the communication abstraction that underlying communication libraries need to support. We then analyze its current MPI backend (parcelport) and identify four categories of needs that are not typical in the BSP model and are not well covered by the MPI standard. To bridge these gaps, we design a new parcelport from the native network layer up, using an experimental communication library, LCI, and incorporating techniques such as one-sided communication, queue-based completion notification, explicit progressing, and several forms of resource contention mitigation. Overall, the resulting LCI parcelport outperforms the existing MPI parcelport by up to 50x in microbenchmarks and up to 2x in a real-world application. Using it as a testbed, we design LCI parcelport variants to quantify the performance contribution of each technique. This work combines conceptual analysis and experimental results to offer practical guidelines for the future development of communication libraries and AMT communication layers.
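To make two of the named techniques concrete, the sketch below illustrates queue-based completion notification combined with explicit progressing, in contrast to MPI's per-request handle model (MPI_Isend plus MPI_Test). This is a minimal, self-contained C++ illustration of the pattern only; all names here (Endpoint, post_send, progress, poll) are hypothetical stand-ins, not the actual LCI or HPX API.

```cpp
// Hedged sketch: hypothetical types stand in for LCI-style primitives.
#include <cstdio>
#include <deque>
#include <optional>
#include <queue>
#include <string>

// A completion record delivered through a shared queue rather than via a
// per-request handle that the sender must repeatedly test.
struct Completion {
    int op_id;        // which posted operation finished
    std::string tag;  // user context carried through to completion
};

// Operations the mock "network" is still working on.
struct PendingOp {
    int op_id;
    std::string tag;
    int steps_left;   // mock latency: progress() calls until done
};

struct Endpoint {
    std::deque<PendingOp> pending;
    std::queue<Completion> cq;  // queue-based completion notification
    int next_id = 0;

    // Post a send and return immediately; completion arrives on the queue.
    int post_send(std::string tag, int latency) {
        int id = next_id++;
        pending.push_back({id, std::move(tag), latency});
        return id;
    }

    // Explicit progressing: the runtime decides when communication advances,
    // instead of relying on a hidden background progress thread.
    void progress() {
        if (pending.empty()) return;
        PendingOp& op = pending.front();
        if (--op.steps_left == 0) {
            cq.push({op.op_id, op.tag});
            pending.pop_front();
        }
    }

    // Non-blocking poll of the completion queue.
    std::optional<Completion> poll() {
        if (cq.empty()) return std::nullopt;
        Completion c = cq.front();
        cq.pop();
        return c;
    }
};

int main() {
    Endpoint ep;
    ep.post_send("parcel-A", 3);
    ep.post_send("parcel-B", 1);

    int completed = 0;
    while (completed < 2) {
        ep.progress();  // e.g. invoked from a task scheduler's idle loop
        if (auto c = ep.poll()) {
            std::printf("completed op %d (%s)\n", c->op_id, c->tag.c_str());
            ++completed;
        }
        // ... a real scheduler would run ready tasks between polls ...
    }
}
```

The design point this pattern captures: any thread can drain one shared completion queue and drive progress explicitly from the scheduler, which suits AMT runtimes with many concurrent fine-grained messages better than tracking and testing one request handle per operation.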
Similar Papers
Contemplating a Lightweight Communication Interface for Asynchronous Many-Task Systems
Distributed, Parallel, and Cluster Computing
Makes computer programs talk to each other faster.
Examining MPI and its Extensions for Asynchronous Multithreaded Communication
Distributed, Parallel, and Cluster Computing
Makes supercomputers talk faster for science.
A HPX Communication Benchmark: Distributed FFT using Collectives
Distributed, Parallel, and Cluster Computing
Makes computer programs run 3x faster.