A GPU-resident Memory-Aware Algorithm for Accelerating Bidiagonalization of Banded Matrices
By: Evelyne Ringoot, Rabab Alomairy, Alan Edelman
Potential Business Impact:
Speeds up a core math step of AI and scientific computing (the singular value decomposition) by orders of magnitude on GPUs.
The reduction of a banded matrix to bidiagonal form is a crucial step in the singular value decomposition (SVD), a cornerstone of scientific computing and AI. Although the reduction is highly parallel, it was previously believed to be unsuitable for GPU computation because it is memory bandwidth-bound. Recent developments in GPU hardware, including larger L1 memory per Streaming Multiprocessor/Compute Unit, have changed that. We present the first GPU algorithm for reducing a banded matrix to bidiagonal form, released as part of the NextLA.jl open-source software package. Our algorithm builds on earlier cache-efficient multicore parallel CPU bulge-chasing algorithms, adapted to optimize for GPU throughput. We leverage the Julia language's array abstractions and KernelAbstractions to implement a single hardware- and precision-agnostic function that runs on NVIDIA, AMD, Intel, and Apple Metal GPUs in half, single, and double precision, and we examine performance optimization across hardware architectures and data precisions. We also develop a hardware-aware performance model and identify the key hyperparameters, such as inner tilewidth and block concurrency, that govern optimal GPU execution for bandwidth-bound workloads. We demonstrate that a highly parallel, bandwidth-bound algorithm on the GPU can outperform CPU-based implementations: the GPU algorithm outperforms the multithreaded high-performance CPU libraries PLASMA and SLATE from matrix size 1024 x 1024 onward, and by a factor of more than 100 for 32k x 32k matrices. In addition, performance increases linearly with the matrix bandwidth, so matrices with larger bandwidths can now also be reduced faster. With this work, we break memory bandwidth barriers as well as matrix bandwidth barriers, yielding orders-of-magnitude faster reduction of banded matrices to bidiagonal form on the GPU.
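To make the portability claim concrete, here is a minimal, hypothetical sketch of the KernelAbstractions.jl pattern the abstract describes: one kernel, written once, that runs on NVIDIA, AMD, Intel, Apple Metal, or CPU backends depending only on the array type it receives. The kernel applies a precomputed Givens rotation to two rows of a matrix, a core primitive of bulge chasing; it illustrates the programming model only and is not the paper's actual NextLA.jl kernel, and the function names below are our own.

using KernelAbstractions

# Apply a precomputed Givens rotation [c s; -s c] to rows i1 and i2 of A,
# processing one matrix column per work-item.
@kernel function givens_rows_kernel!(A, c, s, i1, i2)
    j = @index(Global)
    @inbounds begin
        a1 = A[i1, j]
        a2 = A[i2, j]
        A[i1, j] =  c * a1 + s * a2
        A[i2, j] = -s * a1 + c * a2
    end
end

# Dispatch on whatever backend owns A (CUDA, ROCm, oneAPI, Metal, or CPU).
function apply_givens_rows!(A, c, s, i1, i2)
    backend = get_backend(A)
    kernel! = givens_rows_kernel!(backend)  # workgroup size left dynamic here
    kernel!(A, c, s, i1, i2; ndrange = size(A, 2))
    KernelAbstractions.synchronize(backend)
    return A
end

# Example: the same call works for Array, CuArray, ROCArray, MtlArray, ...
A = rand(Float32, 8, 8)
apply_givens_rows!(A, 0.6f0, 0.8f0, 1, 2)   # c^2 + s^2 == 1

Because the kernel is generic over the element type, the same function covers half, single, and double precision; launch-time choices such as the workgroup size are tunable knobs analogous to the inner tilewidth and block concurrency hyperparameters the paper identifies.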
Similar Papers
Efficient GPU-Centered Singular Value Decomposition Using the Divide-and-Conquer Method
Distributed, Parallel, and Cluster Computing
Makes computers find patterns in data much faster.
Performant Unified GPU Kernels for Portable Singular Value Computation Across Hardware and Precision
Distributed, Parallel, and Cluster Computing
Makes computers learn faster with better math.
Design of A Low-Latency and Parallelizable SVD Dataflow Architecture on FPGA
Distributed, Parallel, and Cluster Computing
Speeds up computer analysis of big data streams.