Accelerating Sparse MTTKRP for Small Tensor Decomposition on GPU
By: Sasindu Wijeratne, Rajgopal Kannan, Viktor Prasanna
Potential Business Impact:
Speeds up a core step of sparse tensor analysis on GPUs, so patterns can be extracted from large, sparse datasets much faster.
Sparse Matricized Tensor Times Khatri-Rao Product (spMTTKRP) is the bottleneck kernel of sparse tensor decomposition. In tensor decomposition, spMTTKRP is performed iteratively along every mode of the input tensor. In this work, we propose a mode-specific tensor layout on GPU that uses multiple tensor copies, each optimized for a specific mode. The proposed layout increases the data locality of external memory accesses and eliminates the intermediate values communicated between GPU thread blocks and GPU global memory. We also propose a tensor partitioning scheme that distributes the total computation among GPU streaming multiprocessors based on the sparsity and the dimensions of the input tensor. Our approach achieves geometric mean speedups of 2.4x, 7.9x, and 8.9x in total execution time over state-of-the-art GPU baselines.
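To make the kernel concrete, the sketch below computes spMTTKRP along mode 0 for a 3-way tensor stored in COO format, on CPU with NumPy. It is an illustrative reference implementation of the operation itself, not the paper's GPU layout or partitioning scheme; the function name and argument layout are assumptions for this example. For each nonzero value at coordinates (i, j, k), the output row i accumulates the value times the elementwise (Hadamard) product of row j of factor B and row k of factor C.

```python
import numpy as np

def sp_mttkrp_mode0(coords, vals, B, C, dim0, rank):
    """Sparse MTTKRP along mode 0 of a 3-way COO tensor.

    coords: (nnz, 3) integer array of nonzero coordinates (i, j, k)
    vals:   (nnz,) nonzero values
    B, C:   factor matrices for modes 1 and 2, shapes (dim1, rank), (dim2, rank)
    Returns M of shape (dim0, rank): M[i] = sum over nonzeros at row i
    of val * (B[j] * C[k]).
    """
    M = np.zeros((dim0, rank))
    for (i, j, k), v in zip(coords, vals):
        # Each nonzero contributes a scaled Hadamard product of factor rows.
        M[i] += v * B[j] * C[k]
    return M

# Small usage example on a 2 x 3 x 2 tensor with three nonzeros.
coords = np.array([[0, 0, 0], [1, 2, 1], [0, 1, 1]])
vals = np.array([1.0, 2.0, 3.0])
rng = np.random.default_rng(0)
B = rng.random((3, 2))
C = rng.random((2, 2))
M = sp_mttkrp_mode0(coords, vals, B, C, dim0=2, rank=2)
```

Iterating over nonzeros in this way is what makes the external-memory access pattern irregular: consecutive nonzeros may scatter updates to arbitrary output rows, which is exactly the locality problem the paper's mode-specific layout targets by keeping one tensor copy ordered favorably for each mode.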
Similar Papers
AMPED: Accelerating MTTKRP for Billion-Scale Sparse Tensor Decomposition on Multiple GPUs
Distributed, Parallel, and Cluster Computing
Speeds up computer analysis of huge, messy data.
A Performance Portable Matrix Free Dense MTTKRP in GenTen
Mathematical Software
Makes computers find patterns in data faster.