Integrating and Characterizing HPC Task Runtime Systems for Hybrid AI-HPC Workloads
By: Andre Merzky, Mikhail Titov, Matteo Turilli, and more
Potential Business Impact:
Makes supercomputers run science and AI faster.
Scientific workflows increasingly involve both HPC and machine-learning tasks, combining MPI-based simulations, training, and inference in a single execution. Launchers such as Slurm's srun constrain concurrency and throughput, making them unsuitable for dynamic and heterogeneous workloads. We present a performance study of RADICAL-Pilot (RP) integrated with Flux and Dragon, two complementary runtime systems that enable hierarchical resource management and high-throughput function execution. Using synthetic and production-scale workloads on Frontier, we characterize the task execution properties of RP across runtime configurations. RP+Flux sustains up to 930 tasks/s, and RP+Flux+Dragon exceeds 1,500 tasks/s with over 99.6% utilization. In contrast, srun peaks at 152 tasks/s and degrades with scale, with utilization below 50%. For the IMPECCABLE.v2 drug discovery campaign, RP+Flux reduces makespan by 30-60% relative to srun/Slurm and increases throughput by more than a factor of four on up to 1,024 nodes. These results demonstrate that hybrid runtime integration in RP is a scalable approach for hybrid AI-HPC workloads.
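As a rough illustration of the programming model benchmarked here, the sketch below uses RADICAL-Pilot's Python API to run one MPI simulation alongside many short inference tasks under a single pilot. The resource label 'ornl.frontier', the allocation size, and the './simulate' and './infer' executables are assumptions for illustration, not values from the paper; selecting the Flux or Dragon backend is typically done through RP's resource configuration rather than in the script itself.

```python
# Minimal sketch of a hybrid AI-HPC workload submitted through RADICAL-Pilot.
# Resource label, sizes, and executables are illustrative assumptions.
import radical.pilot as rp

session = rp.Session()
try:
    pmgr = rp.PilotManager(session=session)
    tmgr = rp.TaskManager(session=session)

    # Acquire a pilot (placeholder job) on the target machine; the launch
    # backend (e.g., Flux) is assumed to come from RP's resource config.
    pilot = pmgr.submit_pilots(rp.PilotDescription({
        'resource': 'ornl.frontier',   # assumed resource label
        'nodes'   : 8,                 # assumed allocation size
        'runtime' : 60,                # minutes
    }))
    tmgr.add_pilots(pilot)

    tds = []

    # One MPI-based simulation task.
    td = rp.TaskDescription()
    td.executable = './simulate'       # hypothetical executable
    td.ranks      = 64                 # number of MPI ranks
    tds.append(td)

    # Many short tasks: the high-throughput part of the workload.
    for i in range(1000):
        td = rp.TaskDescription()
        td.executable = './infer'      # hypothetical executable
        td.arguments  = [str(i)]
        tds.append(td)

    # All tasks are scheduled concurrently within the single pilot,
    # rather than as individual srun invocations.
    tmgr.submit_tasks(tds)
    tmgr.wait_tasks()
finally:
    session.close()
```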
Similar Papers
Scalable Runtime Architecture for Data-driven, Hybrid HPC and ML Workflow Applications
Distributed, Parallel, and Cluster Computing
Lets computers learn from science data faster.
RHAPSODY: Execution of Hybrid AI-HPC Workflows at Scale
Distributed, Parallel, and Cluster Computing
Lets supercomputers run AI and science together.
Deep RC: A Scalable Data Engineering and Deep Learning Pipeline
Distributed, Parallel, and Cluster Computing
Speeds up science by linking data and learning.