Easy Acceleration with Distributed Arrays
By: Jeremy Kepner, Chansup Byun, LaToya Anderson, and more
Potential Business Impact:
Makes computers process data much faster by spreading it across many machines.
High-level programming languages and GPU accelerators are powerful enablers for a wide range of applications. Achieving scalable vertical (within a compute node), horizontal (across compute nodes), and temporal (over different generations of hardware) performance while retaining productivity requires effective abstractions. Distributed arrays are one such abstraction that enables high-level programming to achieve highly scalable performance. Distributed arrays achieve this performance by deriving parallelism from data locality, which naturally leads to high memory bandwidth efficiency. This paper explores distributed array performance using the STREAM memory bandwidth benchmark on a variety of hardware. Scalable performance is demonstrated within and across CPU cores, CPU nodes, and GPU nodes. Horizontal scaling across multiple nodes was linear. The hardware used spans decades and allows a direct comparison of hardware improvements for memory bandwidth over this time range, showing a 10x increase in CPU core bandwidth over 20 years, a 100x increase in CPU node bandwidth over 20 years, and a 5x increase in GPU node bandwidth over 5 years. Running on hundreds of MIT SuperCloud nodes simultaneously achieved a sustained bandwidth >1 PB/s.
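The STREAM benchmark's "triad" kernel (a = b + q*c) is the standard way such memory bandwidth figures are measured. A minimal single-partition sketch in plain Python is shown below; this is illustrative only, not the paper's code, which runs the kernel on distributed arrays spanning many nodes and sums the per-partition bandwidths:

```python
# Illustrative sketch of the STREAM triad kernel on one local
# array partition. The paper's measurements aggregate this
# per-partition bandwidth across cores and nodes.
import time
from array import array

def triad(a, b, c, q):
    # STREAM triad: a[i] = b[i] + q * c[i] over the local partition
    for i in range(len(a)):
        a[i] = b[i] + q * c[i]

def measure_bandwidth(n=1_000_000, q=3.0):
    # Three arrays of 8-byte floats; the triad reads b and c and
    # writes a, so it moves 3 * 8 * n bytes through memory.
    a = array('d', [0.0]) * n
    b = array('d', [1.0]) * n
    c = array('d', [2.0]) * n
    t0 = time.perf_counter()
    triad(a, b, c, q)
    elapsed = time.perf_counter() - t0
    bytes_moved = 3 * 8 * n
    return bytes_moved / elapsed  # bytes per second

if __name__ == "__main__":
    bw = measure_bandwidth()
    print(f"local triad bandwidth: {bw / 1e9:.2f} GB/s")
```

Pure Python is far slower than the compiled kernels the benchmark actually uses, but the accounting is the same: total bytes moved divided by elapsed time, summed over all partitions running concurrently.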
Similar Papers
Terabyte-Scale Analytics in the Blink of an Eye
Databases
Runs big data jobs 60 times faster.
High-Dimensional Data Processing: Benchmarking Machine Learning and Deep Learning Architectures in Local and Distributed Environments
Distributed, Parallel, and Cluster Computing
Teaches computers to learn from lots of information.
StarDist: A Code Generator for Distributed Graph Algorithms
Distributed, Parallel, and Cluster Computing
Makes big computer graphs work much faster.