Reexamining Paradigms of End-to-End Data Movement

Published: December 17, 2025 | arXiv ID: 2512.15028v1

By: Chin Fang, Timothy Stitt, Michael J. McManus, and more

Potential Business Impact:

Makes data transfers faster and more consistent, even over long distances.

Business Areas:
Data Center Hardware, Information Technology

The pursuit of high-performance data transfer often focuses on raw network bandwidth, and international links of 100 Gbps or higher are frequently considered the primary enabler. While bandwidth is necessary, this network-centric view is incomplete: it equates provisioned link speeds with practical, sustainable data movement capabilities across the entire edge-to-core spectrum. This paper investigates six common paradigms, from the often-cited constraints of network latency and TCP congestion control algorithms to host-side factors such as CPU performance and virtualization that critically impact data movement workflows. We validated our findings using a latency-emulation-capable testbed for high-speed WAN performance prediction and through extensive production measurements, from resource-constrained edge environments to a 100 Gbps operational link connecting Switzerland and California, U.S. These results show that the principal bottlenecks often reside outside the network core, and that holistic hardware-software co-design ensures consistent performance, whether data moves at 1 Gbps or at 100 Gbps and beyond. This approach effectively closes the fidelity gap between benchmark results and diverse, complex production environments.
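
To make the latency point concrete, here is a minimal sketch (not from the paper) of the bandwidth-delay product calculation that governs TCP buffer sizing on long-haul links. The 160 ms round-trip time is an assumed figure for a Switzerland-to-California path, used only for illustration.

```python
# Illustrative sketch, not the authors' method: why provisioned bandwidth
# alone does not guarantee throughput on a long-distance link. The RTT is
# an assumed value for a Switzerland <-> California path.

def bandwidth_delay_product(bandwidth_gbps: float, rtt_ms: float) -> float:
    """Return the BDP in bytes: the data 'in flight' needed to keep the pipe full."""
    bits_in_flight = bandwidth_gbps * 1e9 * (rtt_ms / 1e3)
    return bits_in_flight / 8


if __name__ == "__main__":
    link_gbps = 100.0   # provisioned WAN bandwidth
    rtt_ms = 160.0      # assumed round-trip time (illustrative)
    bdp_bytes = bandwidth_delay_product(link_gbps, rtt_ms)
    print(f"BDP: {bdp_bytes / 2**30:.1f} GiB")
    # A single TCP flow needs socket buffers (and kernel memory limits) of at
    # least this size to fill the link; host memory, CPU, and virtualization
    # overhead can make that impractical, which is one reason bottlenecks
    # often sit outside the network core.
```

For a 100 Gbps link at 160 ms RTT this works out to roughly 1.9 GiB of in-flight data per flow, which is why host-side provisioning matters as much as the link itself.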

Page Count
20 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing