Rethinking Inter-Process Communication with Memory Operation Offloading
By: Misun Park, Richi Dubey, Yifan Yuan, and more
Potential Business Impact:
Makes computers share data faster, using less power.
As multimodal and AI-driven services exchange hundreds of megabytes per request, existing IPC runtimes spend a growing share of CPU cycles on memory copies. Although memory offloading is being explored through both hardware and software mechanisms, current IPC stacks lack a unified runtime model to coordinate them effectively. This paper presents a unified IPC runtime suite that integrates both hardware- and software-based memory offloading into shared-memory communication. The system characterizes the interaction between offload strategies and IPC execution, including synchronization, cache visibility, and concurrency, and introduces multiple IPC modes that balance throughput, latency, and CPU efficiency. Through asynchronous pipelining, selective cache injection, and hybrid coordination, the system turns offloading from a device-specific feature into a general system capability. Evaluations on real-world workloads show instruction count reductions of up to 22%, throughput improvements of up to 2.1x, and latency reductions of up to 72%, demonstrating that coordinated IPC offloading can deliver tangible end-to-end efficiency gains in modern data-intensive systems.
Similar Papers
The Future of Memory: Limits and Opportunities
Hardware Architecture
Makes computers faster by putting memory closer.
Taming Offload Overheads in a Massively Parallel Open-Source RISC-V MPSoC: Analysis and Optimization
Distributed, Parallel, and Cluster Computing
Makes computer chips work much faster.
3D MPSoC with On-Chip Cache Support -- Design and Exploitation
Hardware Architecture
Makes computer chips work faster and use less power.