Performance and Numerical Aspects of Decompositional Factorizations with FP64 Floating-Point Emulation in INT8
By: Piotr Luszczek, Vijay Gadepally, LaToya Anderson, and more
Potential Business Impact:
Makes computers faster and more power-efficient.
Mixing precisions for performance has been an ongoing trend as modern hardware accelerators have begun to include new, mostly lower-precision, data formats. The advantage of using them is the great potential for performance gains and energy savings. The disadvantage is the numerical issues not present in the standard-mandated floating-point formats. Split-integer emulation of FP64 takes this to an extreme, with the computation performed only by fixed-point tensor core units. We present the new issues the emulation faces in practical cases involving a dense linear solver. We show extensive numerical tests indicating the effect of the extended numerical range of matrix entries. We also scale the input sizes to study the performance and numerical profiles on NVIDIA Hopper GPUs.
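To make the splitting idea concrete, here is a minimal NumPy sketch of one common approach to split-integer FP64 emulation (an Ozaki-style slicing), not the authors' implementation: the function names, the slice count, and the bits-per-slice value are illustrative assumptions, and on a GPU the inner products would run as INT8 tensor-core GEMMs with exact integer accumulation rather than NumPy integer matmuls.

```python
import numpy as np

BITS = 6          # bits captured per slice; leaves headroom in INT8 (assumption)
NUM_SLICES = 4    # slices kept per operand; accuracy/cost trade-off (assumption)

def split_int8_slices(A, num_slices=NUM_SLICES, bits=BITS):
    """Split a row-scaled FP64 matrix into INT8 slices, Ozaki-style."""
    amax = np.max(np.abs(A), axis=1, keepdims=True)
    amax[amax == 0] = 1.0                     # avoid log2(0) on zero rows
    e = np.floor(np.log2(amax)) - (bits - 1)  # power-of-two row exponents
    R = A * np.exp2(-e)                       # exact power-of-two scaling: |R| < 2^bits
    slices = []
    for _ in range(num_slices):
        S = np.rint(R)                        # top `bits` bits of the residual
        slices.append(S.astype(np.int8))
        R = (R - S) * (1 << bits)             # promote the next bits of the residual
    return slices, e

def emulated_gemm(A, B, num_slices=NUM_SLICES, bits=BITS):
    """Approximate an FP64 GEMM from INT8 x INT8 partial products."""
    As, ea = split_int8_slices(A, num_slices, bits)
    Bs, eb = split_int8_slices(B.T, num_slices, bits)  # slice B by columns
    C = np.zeros((A.shape[0], B.shape[1]))
    for i, Si in enumerate(As):
        for j, Tj in enumerate(Bs):
            if i + j >= num_slices:           # truncate terms below target accuracy
                continue
            # INT8 x INT8 with wide integer accumulation; stands in for the
            # exact fixed-point tensor-core GEMM on the GPU
            P = Si.astype(np.int64) @ Tj.astype(np.int64).T
            C += P * np.exp2(-(i + j) * bits)
    return C * np.exp2(ea) * np.exp2(eb).T    # undo the row/column scalings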
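```

A quick check of the sketch, with rows deliberately spanning a wide range of magnitudes to mimic the extended-numerical-range setting the abstract describes:

```python
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256)) * np.exp2(rng.integers(-8, 8, (256, 1)))
B = rng.standard_normal((256, 256))
rel_err = np.abs(emulated_gemm(A, B) - A @ B).max() / np.abs(A @ B).max()
```

The achievable accuracy depends on the number of slices kept and on the dynamic range of the matrix entries, which is precisely the trade-off the paper's numerical tests probe.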
Similar Papers
Scaling the memory wall using mixed-precision -- HPG-MxP on an exascale machine
Distributed, Parallel, and Cluster Computing
Makes supercomputers run science problems 1.6x faster.
Guaranteed DGEMM Accuracy While Using Reduced Precision Tensor Cores Through Extensions of the Ozaki Scheme
Distributed, Parallel, and Cluster Computing
Makes computers do hard math faster and more accurately.
The Cambrian Explosion of Mixed-Precision Matrix Multiplication for Quantized Deep Learning Inference
Computation and Language
Makes computers do math faster for AI.