Efficient Algorithms for Verifying Kruskal Rank in Sparse Linear Regression and Related Applications
By: Fengqin Zhou
Potential Business Impact:
Quickly checks whether data is structured well enough for reliable analysis.
We present novel algorithmic techniques to efficiently verify the Kruskal rank of matrices that arise in sparse linear regression, tensor decomposition, and latent variable models; the Kruskal rank of a matrix is the largest $k$ such that every set of $k$ columns is linearly independent. Our unified framework combines randomized hashing with dynamic programming and applies across several algebraic settings, including binary fields, general finite fields, and integer matrices. In particular, our algorithms achieve a runtime of $\mathcal{O}\left(dk \cdot \left(nM\right)^{\lceil k / 2 \rceil}\right)$ while ensuring high-probability correctness. Our contributions include: (i) a unified framework for verifying Kruskal rank across different algebraic settings; (ii) rigorous runtime and high-probability guarantees that nearly match known lower bounds; and (iii) practical implications for identifiability in tensor decompositions and deep learning, particularly for the estimation of noise transition matrices.
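To make the flavor of this kind of meet-in-the-middle verification concrete, the sketch below (a minimal illustration under simplifying assumptions, not the paper's algorithm, which also covers general finite fields, integer matrices, and the randomized hashing guarantees) checks over the binary field GF(2) whether every $k$ columns of a matrix are linearly independent by hashing XOR-sums of half-size column subsets and looking for collisions; the function name, bitmask column encoding, and example matrix are illustrative choices.

```python
from itertools import combinations

def kruskal_rank_at_least(cols, k):
    """Return True iff every k columns are linearly independent over GF(2).

    cols: list of columns, each encoded as a Python int bitmask.
    Enumerates roughly O(n^ceil(k/2)) column subsets (n = number of columns).
    """
    n = len(cols)
    lo, hi = k // 2, (k + 1) // 2  # floor(k/2) and ceil(k/2)

    def xor_of(subset):
        s = 0
        for i in subset:
            s ^= cols[i]
        return s

    # Phase 1: hash the XOR-sums of all column subsets of size <= floor(k/2),
    # including the empty set. Two distinct subsets with the same sum give a
    # nonempty dependent set of at most 2*floor(k/2) <= k columns.
    table = {}
    for size in range(0, lo + 1):
        for subset in combinations(range(n), size):
            s = xor_of(subset)
            if s in table:
                return False
            table[s] = frozenset(subset)

    # Phase 2: any subset of size <= ceil(k/2) colliding with a *different*
    # stored subset yields, via symmetric difference, a dependent set of at
    # most floor(k/2) + ceil(k/2) = k columns.
    for size in range(1, hi + 1):
        for subset in combinations(range(n), size):
            fs = frozenset(subset)
            other = table.get(xor_of(subset))
            if other is not None and other != fs:
                return False
    return True


# Example: the columns [e1, e2, e3, e1^e2^e3] over GF(2) have Kruskal rank 3:
# every 3 columns are independent, but all 4 together are dependent.
cols = [0b001, 0b010, 0b100, 0b111]
print(kruskal_rank_at_least(cols, 3))  # True
print(kruskal_rank_at_least(cols, 4))  # False
```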
Similar Papers
Efficient randomized algorithms for the fixed Tucker-rank problem of Tucker decomposition with adaptive shifts
Numerical Analysis
Makes big data problems solve much faster.
Almost-Optimal Local-Search Methods for Sparse Tensor PCA
Statistics Theory
Makes computers find patterns in data better.
Sparse Graph Reconstruction and Seriation for Large-Scale Image Stacks
Data Structures and Algorithms
Sorts messy data with fewer questions.