Scaling MPI Applications on Aurora
By: Huda Ibeid, Anthony-Trung Nguyen, Aditya Nishtala, and more
Potential Business Impact:
Supercomputer solves huge science problems faster.
The Aurora supercomputer, deployed at Argonne National Laboratory in 2024, is currently one of three exascale machines on the TOP500 list. The system comprises over ten thousand nodes, each containing six Intel Data Center GPU Max Series devices, Intel's first data-center-focused discrete GPUs, and two Intel Xeon CPU Max Series processors, Intel's first Xeon processors with on-package HBM. To achieve exascale performance, the system uses the HPE Slingshot high-performance fabric to interconnect the nodes. Aurora is the largest Slingshot deployment to date, with nearly 85,000 Cassini NICs and 5,600 Rosetta switches connected in a dragonfly topology. The combination of the Intel-powered nodes and the Slingshot network made Aurora the second-fastest system on the TOP500 list in June 2024 and the fastest on the HPL-MxP benchmark, and it stands among the most powerful systems in the world dedicated to AI and HPC simulation for open science. This paper presents the Aurora system design, with a particular focus on the network fabric and the approach taken to validating it. System performance is demonstrated through results from MPI benchmarks and from the HPL, HPL-MxP, Graph500, and HPCG benchmarks run on a large fraction of the system. Results are also presented for a diverse set of applications, including HACC, AMR-Wind, LAMMPS, and FMM, demonstrating that Aurora delivers the throughput, latency, and bandwidth across the system needed for applications to perform well and scale to large node counts, providing new levels of capability and enabling breakthrough science.
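The MPI benchmarks referenced above gauge the point-to-point latency and bandwidth the fabric delivers to applications. The abstract does not include code; as a minimal sketch of the kind of ping-pong microbenchmark involved (in the style of the OSU benchmarks), the following C/MPI program times round trips between ranks 0 and 1. The message sizes, iteration counts, and warm-up scheme are illustrative assumptions, not the authors' configuration.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (nprocs < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Illustrative parameters, not the paper's setup. */
    const int warmup = 100, iters = 1000;
    for (size_t bytes = 8; bytes <= 4u << 20; bytes *= 4) {
        char *buf = malloc(bytes);
        double t0 = 0.0;
        for (int i = 0; i < warmup + iters; i++) {
            if (i == warmup) {              /* start timing after warm-up */
                MPI_Barrier(MPI_COMM_WORLD);
                t0 = MPI_Wtime();
            }
            if (rank == 0) {
                MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0) {
            double rtt_us = (t1 - t0) / iters * 1e6;               /* round trip  */
            double mbps   = 2.0 * bytes * iters / (t1 - t0) / 1e6; /* both ways   */
            printf("%8zu B  %10.2f us RTT  %10.1f MB/s\n", bytes, rtt_us, mbps);
        }
        free(buf);
    }
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with the two ranks placed on different nodes (for example, mpiexec -n 2 -ppn 1 under MPICH's Hydra launcher), a loop like this exercises the network fabric rather than on-node shared memory, which is what the large-scale fabric measurements in the paper are ultimately probing.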
Similar Papers
Aurora: Architecting Argonne's First Exascale Supercomputer for Accelerated Scientific Discovery
Distributed, Parallel, and Cluster Computing
Supercomputer helps scientists make discoveries faster.
Performance Analysis of HPC applications on the Aurora Supercomputer: Exploring the Impact of HBM-Enabled Intel Xeon Max CPUs
Distributed, Parallel, and Cluster Computing
Makes supercomputers run faster for science.
Inter-APU Communication on AMD MI300A Systems via Infinity Fabric: a Deep Dive
Distributed, Parallel, and Cluster Computing
Makes supercomputers share data faster between parts.