Dyadic Factorization and Efficient Inversion of Sparse Positive Definite Matrices
By: Michał Kos, Krzysztof Podgórski, Hanqing Wu
Potential Business Impact:
Speeds up the inversion of large sparse matrices, a bottleneck in statistics and scientific computing.
In inverting large sparse matrices, the key difficulty lies in effectively exploiting sparsity during the inversion process. One well-established strategy is nested dissection, which seeks the so-called sparse Cholesky factorization. We argue that the matrices for which such factors can be found are characterized by a hidden dyadic sparsity structure. This paper builds on that idea by proposing an efficient approach to inverting such matrices. The method consists of two independent steps: the first packs the matrix into a dyadic form, while the second performs a sparse (dyadic) Gram-Schmidt orthogonalization of the packed matrix. The novel packing procedure works by recovering block-tridiagonal structures, aggregating nonzero terms near the diagonal as measured by the $l_1$-norm, which contrasts with traditional methods that prioritize minimizing bandwidth, i.e., the $l_\infty$-norm. The algorithm performs particularly well for matrices that can be packed into moderately dense banded or dyadic forms. Owing to the properties of the $l_1$-norm, the packing step can be applied iteratively to reconstruct the hidden dyadic structure, which corresponds to the detection of separators in the nested dissection method. We explore the algebraic properties of dyadic-structured matrices and present an algebraic framework that allows for a unified mathematical treatment of both sparse factorization and efficient inversion of the factors. For matrices with a dyadic structure, we introduce an optimal inversion algorithm and evaluate its computational complexity. The proposed inversion algorithm and the core algebraic operations for dyadic matrices are implemented in the R package DyadiCarma, which uses Rcpp and RcppArmadillo for high-performance computing. An independent R-based matrix packing module, supported by C++ code, is also provided.
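The contrast between the $l_1$ packing objective and the classical bandwidth ($l_\infty$) criterion can be made concrete with a small sketch. The helper functions and sparsity patterns below are illustrative assumptions of ours, not part of DyadiCarma or its API: they only show that the two objectives can rank the same patterns differently, which is why a method aggregating mass near the diagonal behaves unlike a bandwidth minimizer.

```python
# Illustrative sketch only: these helpers are hypothetical, not DyadiCarma code.
# A sparsity pattern is a set of (row, col) index pairs of nonzero entries.

def linf_bandwidth(pattern):
    """Classical bandwidth: largest distance of any nonzero from the diagonal."""
    return max(abs(i - j) for i, j in pattern)

def l1_cost(pattern):
    """Aggregate off-diagonal mass: summed distances of nonzeros from the diagonal."""
    return sum(abs(i - j) for i, j in pattern)

n = 8
# Pattern A: tridiagonal plus one distant symmetric pair at (0, n-1).
A = {(i, i + 1) for i in range(n - 1)} | {(i + 1, i) for i in range(n - 1)}
A |= {(0, n - 1), (n - 1, 0)}
# Pattern B: a full band of half-width 2 (offsets 1 and 2), symmetrized.
B = {(i, i + d) for d in (1, 2) for i in range(n - d)}
B |= {(j, i) for i, j in B}

print(linf_bandwidth(A), l1_cost(A))  # 7 28 -> large bandwidth, small l1 mass
print(linf_bandwidth(B), l1_cost(B))  # 2 38 -> small bandwidth, more l1 mass
```

A bandwidth ($l_\infty$) criterion prefers pattern B, while the $l_1$ aggregate prefers pattern A, whose nonzeros are mostly concentrated at the diagonal despite one distant entry.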
Similar Papers
Chebyshev smoothing with adaptive block-FSAI preconditioners for the multilevel solution of higher-order problems
Numerical Analysis
Makes computers solve hard math problems much faster.
A Unified Perspective on Orthogonalization and Diagonalization
Numerical Analysis
Unifies math tools, making computers faster and more stable.