Reexamining Paradigms of End-to-End Data Movement
By: Chin Fang, Timothy Stitt, Michael J. McManus, et al.
Potential Business Impact:
Makes data move faster, even over long distances.
The pursuit of high-performance data transfer often focuses on raw network bandwidth, with international links of 100 Gbps or more frequently treated as the primary enabler. While bandwidth is necessary, this network-centric view is incomplete: it equates provisioned link speeds with practical, sustainable data movement capability across the entire edge-to-core spectrum. This paper investigates six common paradigms, from the often-cited constraints of network latency and TCP congestion control algorithms to host-side factors, such as CPU performance and virtualization, that critically affect data movement workflows. We validated our findings using a latency-emulation-capable testbed for high-speed WAN performance prediction and through extensive production measurements, ranging from resource-constrained edge environments to an operational 100 Gbps link connecting Switzerland and California, U.S. These results show that the principal bottlenecks often reside outside the network core, and that holistic hardware-software co-design ensures consistent performance whether data moves at 1 Gbps or at 100 Gbps and beyond. This approach effectively closes the fidelity gap between benchmark results and diverse, complex production environments.
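The latency constraint the abstract revisits is easy to quantify: for TCP to sustain a given throughput, the amount of unacknowledged data in flight must equal the bandwidth-delay product (BDP). The sketch below illustrates this arithmetic; the ~150 ms round-trip time for a Switzerland-to-California path and the function name are illustrative assumptions, not figures from the paper.

    def bandwidth_delay_product(bandwidth_gbps: float, rtt_ms: float) -> float:
        """Return the bandwidth-delay product in megabytes.

        BDP = bandwidth * RTT: the volume of unacknowledged data that
        must be in flight to keep the path fully utilized.
        """
        bits_in_flight = bandwidth_gbps * 1e9 * (rtt_ms / 1e3)
        return bits_in_flight / 8 / 1e6  # bits -> bytes -> megabytes

    # Assumed ~150 ms RTT for an intercontinental path; this is an
    # illustrative value, not a measurement reported in the paper.
    for gbps in (1, 10, 100):
        print(f"{gbps:>3} Gbps x 150 ms RTT -> "
              f"{bandwidth_delay_product(gbps, 150):,.1f} MB in flight")

At 100 Gbps over such a path the required window approaches 1.9 GB per flow, which is one reason per-flow socket buffers, host memory, and CPU behavior on the end systems, rather than the provisioned link itself, so often govern achievable throughput.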