cMPI: Using CXL Memory Sharing for MPI One-Sided and Two-Sided Inter-Node Communications
By: Xi Wang, Bin Ma, Jongryool Kim, and more
Potential Business Impact:
Makes supercomputers share data much faster.
Message Passing Interface (MPI) is a foundational programming model for high-performance computing. MPI libraries traditionally employ network interconnects (e.g., Ethernet and InfiniBand) and network protocols (e.g., TCP and RoCE) with complex software stacks for cross-node communication. We present cMPI, the first work to optimize MPI point-to-point communication (both one-sided and two-sided) using CXL memory sharing on a real CXL platform, transforming cross-node communication into memory transactions and data copies within CXL memory and bypassing traditional network protocols. We analyze performance across various interconnects and find that CXL memory sharing achieves 7.2x-8.1x lower latency than TCP-based interconnects deployed in small- and medium-scale clusters. We address the challenges of CXL memory sharing for MPI communication, including data object management over the dax representation [50], cache coherence, and atomic operations. Overall, cMPI outperforms TCP over a standard Ethernet NIC and a high-end SmartNIC by up to 49x and 72x in latency and bandwidth, respectively, for small messages.
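To make the idea concrete, the following is a minimal sketch (not the authors' cMPI implementation) of how a one-sided "put" could be expressed as a plain data copy into a CXL shared-memory region exposed through a devdax device. The device path (/dev/dax0.0), the region layout, the helper names (flush_range, cxl_put), and the explicit cache-flush-plus-flag protocol for software coherence are all illustrative assumptions, not details taken from the paper.

```c
/*
 * Sketch: one-sided put over CXL shared memory exposed as a devdax device.
 * Device path, offsets, and the flush-based coherence protocol are assumptions.
 */
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
#include <fcntl.h>
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define CLSIZE 64  /* cache-line size assumed for flush granularity */

/* Flush a buffer from the local caches so a remote node can observe it. */
static void flush_range(const void *addr, size_t len) {
    const char *p = (const char *)((uintptr_t)addr & ~(uintptr_t)(CLSIZE - 1));
    const char *end = (const char *)addr + len;
    for (; p < end; p += CLSIZE)
        _mm_clflush(p);
    _mm_sfence();
}

/* Copy a message into CXL shared memory, then publish it with an atomic flag. */
static void cxl_put(void *cxl_base, size_t off, const void *msg, size_t len,
                    _Atomic uint64_t *ready_flag) {
    void *dst = (char *)cxl_base + off;
    memcpy(dst, msg, len);                 /* data copy replaces the network send */
    flush_range(dst, len);                 /* make the payload visible remotely   */
    atomic_store_explicit(ready_flag, 1, memory_order_release);
    flush_range((void *)ready_flag, sizeof *ready_flag);
}

int main(void) {
    /* Hypothetical devdax device backed by the shared CXL memory pool. */
    int fd = open("/dev/dax0.0", O_RDWR);
    if (fd < 0) return 1;
    size_t region = 2UL << 20;             /* 2 MiB window for this example */
    void *base = mmap(NULL, region, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) return 1;

    _Atomic uint64_t *flag = (_Atomic uint64_t *)base;     /* control word    */
    const char msg[] = "hello from rank 0";
    cxl_put(base, CLSIZE, msg, sizeof msg, flag);          /* payload at 64 B */

    munmap(base, region);
    close(fd);
    return 0;
}
```

In this sketch the receiving node would invalidate or flush its own cached copies of the payload and flag before reading, since cross-node hardware coherence cannot be assumed on current CXL memory-sharing platforms; the abstract's mention of cache coherence and atomic operations refers to exactly this class of problem.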
Similar Papers
MPI-over-CXL: Enhancing Communication Efficiency in Distributed HPC Systems
Distributed, Parallel, and Cluster Computing
Makes supercomputers share info faster, no copying.
PIM or CXL-PIM? Understanding Architectural Trade-offs Through Large-Scale Benchmarking
Emerging Technologies
Makes computers faster by moving work closer to memory.