Hierarchical Tucker Low-Rank Matrices: Construction and Matrix-Vector Multiplication
By: Yingzhou Li, Jingyu Liu
Potential Business Impact:
Makes computer math faster and uses less memory.
In this paper, a hierarchical Tucker low-rank (HTLR) matrix is proposed to approximate non-oscillatory kernel functions with linear complexity. The HTLR matrix is based on the hierarchical matrix, with the low-rank blocks replaced by Tucker low-rank blocks. Based on high-dimensional interpolation and tensor contractions, algorithms for the construction and matrix-vector multiplication of HTLR matrices are proposed, with linear and quasi-linear complexity, respectively. Numerical experiments demonstrate that the HTLR matrix performs well in terms of both memory and runtime. Furthermore, the HTLR matrix can be applied to quasi-uniform grids in addition to uniform grids, enhancing its versatility.
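To make the idea concrete, the sketch below is a minimal illustration (not the authors' implementation) of a single Tucker low-rank block: a non-oscillatory kernel block between two well-separated 2D boxes is approximated by dimension-wise Chebyshev interpolation, stored as a core tensor of kernel values plus per-dimension Lagrange factor matrices, and applied to a vector through tensor contractions. The logarithmic kernel, box geometry, and interpolation order p are illustrative assumptions.

```python
# Sketch of a Tucker low-rank kernel block (illustrative assumptions throughout):
#   K(x, y) ~ sum_{a1,a2,b1,b2} L_{a1}(x1) L_{a2}(x2) C[a1,a2,b1,b2] L_{b1}(y1) L_{b2}(y2)
import numpy as np

def cheb_nodes(a, b, p):
    """p Chebyshev nodes mapped to the interval [a, b]."""
    t = np.cos((2 * np.arange(p) + 1) * np.pi / (2 * p))
    return 0.5 * (a + b) + 0.5 * (b - a) * t

def lagrange_matrix(x, nodes):
    """Lagrange basis polynomials on `nodes` evaluated at the points `x`."""
    L = np.ones((len(x), len(nodes)))
    for j, nj in enumerate(nodes):
        for k, nk in enumerate(nodes):
            if k != j:
                L[:, j] *= (x - nk) / (nj - nk)
    return L

def kernel(X, Y):
    """Non-oscillatory 2D Laplace kernel -log|x - y| (illustrative choice)."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return -np.log(d)

rng = np.random.default_rng(0)
p = 8                                    # interpolation order per dimension (assumed)
tgt_box = [(0.0, 1.0), (0.0, 1.0)]       # target box
src_box = [(3.0, 4.0), (3.0, 4.0)]       # well-separated source box
X = rng.uniform(size=(200, 2))                          # targets in the target box
Y = rng.uniform(size=(300, 2)) + np.array([3.0, 3.0])   # sources in the source box
q = rng.standard_normal(len(Y))                         # source charges

# Tucker factor matrices: 1D Lagrange interpolation per dimension.
xi = [cheb_nodes(*tgt_box[d], p) for d in range(2)]
yi = [cheb_nodes(*src_box[d], p) for d in range(2)]
Lx = [lagrange_matrix(X[:, d], xi[d]) for d in range(2)]
Ly = [lagrange_matrix(Y[:, d], yi[d]) for d in range(2)]

# Core tensor: kernel evaluated on the tensor grid of Chebyshev nodes.
XX = np.stack(np.meshgrid(xi[0], xi[1], indexing="ij"), axis=-1).reshape(-1, 2)
YY = np.stack(np.meshgrid(yi[0], yi[1], indexing="ij"), axis=-1).reshape(-1, 2)
C = kernel(XX, YY).reshape(p, p, p, p)                  # C[a1, a2, b1, b2]

# Matrix-vector product via tensor contractions: project the charges onto the
# source interpolation grid, contract with the core, interpolate back to targets.
t = np.einsum("ib,ic,i->bc", Ly[0], Ly[1], q)
s = np.einsum("abcd,cd->ab", C, t)
u = np.einsum("ia,ib,ab->i", Lx[0], Lx[1], s)

u_dense = kernel(X, Y) @ q                              # reference dense mat-vec
print("relative error:", np.linalg.norm(u - u_dense) / np.linalg.norm(u_dense))
```

For well-separated boxes the interpolation error decays rapidly in p, so each admissible block can be stored and applied through its small core tensor and factor matrices rather than the dense block, which is what drives the memory and runtime savings reported for HTLR matrices.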
Similar Papers
Parametric Hierarchical Matrix Approximations to Kernel Matrices
Numerical Analysis
Solves computer math problems 100x faster.
Matrices over a Hilbert space and their low-rank approximation
Numerical Analysis
Makes computers solve hard math problems faster.