Optimizing Data Transfer Performance and Energy Efficiency with Deep Reinforcement Learning
By: Hasibul Jamil, Jacob Goldverg, Elvis Rodrigues, and more
Potential Business Impact:
Makes data move faster and use less power.
The rapid growth of data across science and industry has increased the need to improve the performance of end-to-end data transfers while using resources more efficiently. In this paper, we present a dynamic, multi-parameter reinforcement learning (RL) framework that adjusts application-layer transfer settings during data transfers on shared networks. Our method strikes a balance between high throughput and low energy utilization by employing reward signals that account for both energy efficiency and fairness. The RL agents can pause and resume transfer threads as needed, pausing during heavy network use and resuming when resources become available, to prevent overload and save energy. We evaluate several RL techniques and compare our solution with state-of-the-art methods by measuring computational overhead, adaptability, throughput, and energy consumption. Our experiments show up to a 25% increase in throughput and up to a 40% reduction in energy usage at the end systems compared to baseline methods, highlighting a fair and energy-efficient way to optimize data transfers in shared network environments.
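The core idea, an agent that tunes the number of active transfer threads using a reward that trades throughput against end-system energy, can be illustrated with a minimal sketch. This is not the paper's implementation: the epsilon-greedy Q-learning agent, the reward weighting `alpha`, and the toy congestion model below are all simplifying assumptions chosen for clarity.

```python
import random

class TransferAgent:
    """Epsilon-greedy bandit-style agent that picks how many transfer
    threads to keep active. Lowering the count models pausing threads
    under congestion; raising it models resuming them."""

    def __init__(self, max_threads=8, epsilon=0.1, alpha=1.7, lr=0.2):
        self.actions = list(range(1, max_threads + 1))  # active thread counts
        self.epsilon = epsilon  # exploration rate
        self.alpha = alpha      # weight on the energy penalty in the reward
        self.lr = lr            # Q-value learning rate
        self.q = {a: 0.0 for a in self.actions}

    def reward(self, throughput, energy):
        # Reward trades off achieved throughput against energy draw.
        return throughput - self.alpha * energy

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[a])

    def update(self, action, throughput, energy):
        r = self.reward(throughput, energy)
        self.q[action] += self.lr * (r - self.q[action])

def simulate(agent, steps=2000):
    """Toy shared-network model: throughput grows sublinearly with thread
    count (congestion), while energy grows roughly linearly, so a
    mid-range concurrency level maximizes the combined reward."""
    random.seed(0)
    for _ in range(steps):
        a = agent.choose()
        throughput = 10 * a ** 0.6 + random.gauss(0, 0.5)
        energy = 2.0 * a + random.gauss(0, 0.2)
        agent.update(a, throughput, energy)
    return max(agent.q, key=agent.q.get)

agent = TransferAgent()
best = simulate(agent)  # converges toward a moderate thread count
```

In this toy model the agent learns to back off from maximum concurrency because the marginal throughput of extra threads no longer justifies their energy cost, which mirrors the pause-during-congestion behavior described in the abstract.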
Similar Papers
Dynamic Optimization of Storage Systems Using Reinforcement Learning Techniques
Operating Systems
Makes computer storage faster by learning automatically.
Dynamic Preference Multi-Objective Reinforcement Learning for Internet Network Management
Networking and Internet Architecture
Helps internet networks adapt to changing needs.
Diffusion-RL for Scalable Resource Allocation for 6G Networks
Networking and Internet Architecture
Makes phone networks faster and more reliable.