Score: 2

cMPI: Using CXL Memory Sharing for MPI One-Sided and Two-Sided Inter-Node Communications

Published: October 7, 2025 | arXiv ID: 2510.05476v1

By: Xi Wang, Bin Ma, Jongryool Kim, and more

BigTech Affiliations: SK Hynix

Potential Business Impact:

Lets nodes in high-performance computing clusters exchange data much faster by sharing CXL memory instead of sending it over the network.

Business Areas:
Meeting Software, Messaging and Telecommunications, Software

Message Passing Interface (MPI) is a foundational programming model for high-performance computing. MPI libraries traditionally employ network interconnects (e.g., Ethernet and InfiniBand) and network protocols (e.g., TCP and RoCE) with complex software stacks for cross-node communication. We present cMPI, the first work to optimize MPI point-to-point communication (both one-sided and two-sided) using CXL memory sharing on a real CXL platform, transforming cross-node communication into memory transactions and data copies within CXL memory and bypassing traditional network protocols. We analyze performance across various interconnects and find that CXL memory sharing achieves 7.2x-8.1x lower latency than TCP-based interconnects deployed in small- and medium-scale clusters. We address the challenges of CXL memory sharing for MPI communication, including data object management over the dax representation [50], cache coherence, and atomic operations. Overall, cMPI outperforms TCP over a standard Ethernet NIC and a high-end SmartNIC by up to 49x and 72x in latency and bandwidth, respectively, for small messages.
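
To make the abstract's idea concrete, below is a minimal sketch (not cMPI's actual design) of passing a small message through a CXL shared-memory region exposed as a device-dax node. The device path /dev/dax0.0, the mailbox layout, and the polling protocol are all assumptions for illustration; the sketch only shows the ingredients the abstract names: mapping the dax device, copying the payload, flushing cache lines for coherence, and signaling with an atomic flag.

```c
// Hypothetical sender/receiver sharing a mailbox in CXL memory via device-dax.
// Build: gcc -O2 cxl_mailbox.c -o cxl_mailbox ; run "./cxl_mailbox send" on one
// host and "./cxl_mailbox" on the other (assumes both map the same CXL device).
#include <fcntl.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <emmintrin.h>              /* _mm_clflush, _mm_mfence */

#define MAP_LEN (2UL << 20)         /* device-dax mappings are typically 2 MiB aligned */
#define MSG_BYTES 64

struct mailbox {
    _Atomic uint64_t ready;         /* 0 = empty, 1 = message present */
    char payload[MSG_BYTES];        /* small-message buffer */
};

static void flush_range(const void *p, size_t len)
{
    /* Flush each 64-byte cache line so the writes reach CXL memory
       and become visible to the other host. */
    for (size_t off = 0; off < len; off += 64)
        _mm_clflush((const char *)p + off);
    _mm_mfence();
}

int main(int argc, char **argv)
{
    /* /dev/dax0.0 is a placeholder for the shared CXL memory device. */
    int fd = open("/dev/dax0.0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct mailbox *mb = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (mb == MAP_FAILED) { perror("mmap"); return 1; }

    if (argc > 1 && strcmp(argv[1], "send") == 0) {
        /* Sender: copy the payload, flush it, then publish the flag. */
        memcpy(mb->payload, "hello over CXL", 15);
        flush_range(mb->payload, MSG_BYTES);
        atomic_store_explicit(&mb->ready, 1, memory_order_release);
        flush_range(&mb->ready, sizeof mb->ready);
    } else {
        /* Receiver: poll the flag, then read the payload. */
        while (atomic_load_explicit(&mb->ready, memory_order_acquire) == 0)
            ;
        printf("received: %s\n", mb->payload);
    }

    munmap(mb, MAP_LEN);
    close(fd);
    return 0;
}
```

A real MPI library layered on this idea would additionally manage message queues and data objects inside the dax region and use hardware-appropriate flush/fence and atomic primitives, which is where the paper's contributions lie.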

Country of Origin
🇰🇷 🇺🇸 South Korea, United States

Page Count
13 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing