Towards a Higher Roofline for Matrix-Vector Multiplication in Matrix-Free HOSFEM
By: Zijian Cao, Qiao Sun, Tiangong Zhang, et al.
Potential Business Impact:
Speeds up GPU-based scientific simulations by recomputing cheap intermediate values on the fly instead of repeatedly loading them from memory.
Modern GPGPUs provide massive arithmetic throughput, yet many scientific kernels remain limited by memory bandwidth. In particular, repeatedly loading precomputed auxiliary data wastes abundant compute resources while stressing the memory hierarchy. A promising strategy is to replace memory traffic with inexpensive recomputation, thereby alleviating bandwidth pressure and enabling applications to better exploit heterogeneous compute units. Guided by this strategy, we optimize the high-order/spectral finite element method (HOSFEM), a widely used approach for solving PDEs. Its performance is largely determined by AxLocal, a matrix-free kernel for element-local matrix-vector multiplications. In AxLocal, geometric factors dominate memory accesses while contributing minimally to computation, creating a bandwidth bottleneck that caps the performance roofline. To address this challenge, we propose the first practical, low-overhead on-the-fly recomputation of geometric factors for trilinear and parallelepiped elements. This reformulation reduces data movement and raises the achievable roofline, revealing untapped optimization potential for tensor contractions. With hardware-aware techniques including loop unrolling, Tensor Core acceleration, and constant memory utilization, the optimized kernels reach 85%-100% of the roofline efficiency. Compared with state-of-the-art implementations in the Nek series, they deliver speedups of 1.74x-4.10x on NVIDIA A100 and 1.99x-3.78x on Hygon K100, leading to a 1.12x-1.40x improvement in the full HOSFEM benchmark. These results demonstrate that combining algorithmic reformulation with hardware-specific tuning can remove long-standing bottlenecks and fully exploit the performance potential of large-scale high-order simulations.
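The core idea, trading memory traffic for recomputation, can be illustrated with a small sketch. The code below is an assumption-laden illustration, not the paper's kernel: for a parallelepiped element the Jacobian J is constant, with columns equal to the element's three edge vectors, so the symmetric geometric factors G = |det J| * J^{-1} J^{-T} can be recomputed from just nine numbers per element instead of being streamed from memory at every quadrature point. The function name and packed-storage layout are hypothetical.

```c
#include <math.h>

/* Hedged sketch (assumed names/layout, not the paper's implementation).
 * For a parallelepiped element the Jacobian J = [e0 | e1 | e2] is constant.
 * Instead of loading 6 precomputed symmetric geometric factors per quadrature
 * point, recompute G = |det J| * J^{-1} J^{-T} from the three edge vectors:
 * a few dozen flops in exchange for a large cut in memory traffic.
 * G is packed as the upper triangle: G00, G01, G02, G11, G12, G22. */
static void geometric_factors(const double e0[3], const double e1[3],
                              const double e2[3], double G[6]) {
    /* Unscaled rows of J^{-1} are cross products of the edge vectors. */
    double r0[3] = { e1[1]*e2[2] - e1[2]*e2[1],     /* e1 x e2 */
                     e1[2]*e2[0] - e1[0]*e2[2],
                     e1[0]*e2[1] - e1[1]*e2[0] };
    double r1[3] = { e2[1]*e0[2] - e2[2]*e0[1],     /* e2 x e0 */
                     e2[2]*e0[0] - e2[0]*e0[2],
                     e2[0]*e0[1] - e2[1]*e0[0] };
    double r2[3] = { e0[1]*e1[2] - e0[2]*e1[1],     /* e0 x e1 */
                     e0[2]*e1[0] - e0[0]*e1[2],
                     e0[0]*e1[1] - e0[1]*e1[0] };

    /* det J via the scalar triple product e0 . (e1 x e2). */
    double detJ = e0[0]*r0[0] + e0[1]*r0[1] + e0[2]*r0[2];

    /* G_ij = |det J| * (r_i . r_j) / detJ^2  (rows of J^{-1} are r_k/detJ). */
    double s = fabs(detJ) / (detJ * detJ);
    G[0] = s * (r0[0]*r0[0] + r0[1]*r0[1] + r0[2]*r0[2]);
    G[1] = s * (r0[0]*r1[0] + r0[1]*r1[1] + r0[2]*r1[2]);
    G[2] = s * (r0[0]*r2[0] + r0[1]*r2[1] + r0[2]*r2[2]);
    G[3] = s * (r1[0]*r1[0] + r1[1]*r1[1] + r1[2]*r1[2]);
    G[4] = s * (r1[0]*r2[0] + r1[1]*r2[1] + r1[2]*r2[2]);
    G[5] = s * (r2[0]*r2[0] + r2[1]*r2[1] + r2[2]*r2[2]);
}
```

For a unit cube (axis-aligned unit edge vectors) this yields the identity metric, i.e. unit diagonal and zero off-diagonal entries in the packed array, which is a quick sanity check for the sign and scaling conventions assumed here.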
Similar Papers
Mixed-Precision Performance Portability of FFT-Based GPU-Accelerated Algorithms for Block-Triangular Toeplitz Matrices
Distributed, Parallel, and Cluster Computing
Runs mixed-precision FFT-based solvers efficiently across different GPU platforms.