REACH: Reinforcement Learning for Efficient Allocation in Community and Heterogeneous Networks
By: Zhiwei Yu, Chengze Du, Heng Xu, and others
Potential Business Impact:
Makes shared computer power work better for AI.
Community GPU platforms are emerging as a cost-effective and democratized alternative to centralized GPU clusters for AI workloads, aggregating idle consumer GPUs from globally distributed and heterogeneous environments. However, their extreme hardware/software diversity, volatile availability, and variable network conditions render traditional schedulers ineffective, leading to suboptimal task completion. In this work, we present REACH (Reinforcement Learning for Efficient Allocation in Community and Heterogeneous Networks), a Transformer-based reinforcement learning framework that redefines task scheduling as a sequence scoring problem to balance performance, reliability, cost, and network efficiency. By modeling both global GPU states and task requirements, REACH learns to adaptively co-locate computation with data, prioritize critical jobs, and mitigate the impact of unreliable resources. Extensive simulation results show that REACH improves task completion rates by up to 17%, more than doubles the success rate for high-priority tasks, and reduces bandwidth penalties by over 80% compared to state-of-the-art baselines. Stress tests further demonstrate its robustness to GPU churn and network congestion, while scalability experiments confirm its effectiveness in large-scale, high-contention scenarios.
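The abstract's core idea, recasting scheduling as scoring a sequence of candidate GPUs against a task, can be illustrated with a minimal sketch. The real system uses a Transformer policy trained with reinforcement learning; here a fixed linear scorer stands in for the learned model, and all feature names, weights, and GPU entries are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of REACH-style "sequence scoring": score every
# candidate GPU for a task, then allocate the top-scoring one. A fixed
# weighted sum stands in for the learned Transformer scorer; the weights
# and features below are invented for illustration only.

from dataclasses import dataclass

@dataclass
class GPUState:
    name: str
    perf: float         # normalized throughput estimate (higher is better)
    reliability: float  # estimated probability the node stays online (0..1)
    cost: float         # normalized price (lower is better)
    bandwidth: float    # normalized link quality to the task's data (higher is better)

def score(gpu: GPUState, priority: float) -> float:
    """Stand-in for the learned scorer: a weighted sum over the four
    objectives the paper balances (performance, reliability, cost,
    network efficiency). Higher task priority up-weights reliability,
    mimicking how REACH favors dependable nodes for critical jobs."""
    return (0.4 * gpu.perf
            + (0.3 + 0.2 * priority) * gpu.reliability
            - 0.2 * gpu.cost
            + 0.3 * gpu.bandwidth)

def schedule(task_priority: float, pool: list) -> GPUState:
    # Score the whole candidate sequence and pick the argmax.
    return max(pool, key=lambda g: score(g, task_priority))

# Illustrative pool of heterogeneous community GPUs.
pool = [
    GPUState("consumer-3060", perf=0.5, reliability=0.60, cost=0.2, bandwidth=0.4),
    GPUState("consumer-4090", perf=0.9, reliability=0.70, cost=0.5, bandwidth=0.3),
    GPUState("idle-a6000",    perf=0.8, reliability=0.95, cost=0.6, bandwidth=0.8),
]

print(schedule(task_priority=1.0, pool=pool).name)  # reliable, well-connected node wins
```

In the paper's actual framework the hand-tuned weights are replaced by a Transformer that attends over the global GPU state sequence and the task's requirements, so the trade-off among the four objectives is learned from reward rather than fixed.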
Similar Papers
REACH: Reinforcement Learning for Adaptive Microservice Rescheduling in the Cloud-Edge Continuum
Distributed, Parallel, and Cluster Computing
Makes apps faster by moving them closer to you.
Hybrid Learning and Optimization-Based Dynamic Scheduling for DL Workloads on Heterogeneous GPU Clusters
Distributed, Parallel, and Cluster Computing
Makes computer jobs run faster and use less power.
HetRL: Efficient Reinforcement Learning for LLMs in Heterogeneous Environments
Distributed, Parallel, and Cluster Computing
Trains AI faster on different kinds of computers.