Efficient Low-Tubal-Rank Tensor Estimation via Alternating Preconditioned Gradient Descent

Published: December 8, 2025 | arXiv ID: 2512.07490v1

By: Zhiyu Liu, Zhi Han, Yandong Tang, and more

Potential Business Impact:

Makes large-scale tensor computations solve much faster.

Business Areas:
A/B Testing Data and Analytics

The problem of low-tubal-rank tensor estimation is a fundamental task with wide applications across high-dimensional signal processing, machine learning, and image science. Traditional approaches tackle this problem by performing the tensor singular value decomposition, which is computationally expensive and becomes infeasible for large-scale tensors. Recent approaches address this issue by factorizing the tensor into two smaller factor tensors and solving the resulting problem with gradient descent. However, such approaches require an accurate estimate of the tensor rank, and when the rank is overestimated, gradient descent and its variants converge much more slowly or even diverge. To address this problem, we propose an Alternating Preconditioned Gradient Descent (APGD) algorithm, which accelerates convergence in the over-parameterized setting by adding a preconditioning term to the original gradient and updating the two factors alternately. Under certain geometric assumptions on the objective function, we establish linear convergence guarantees for general low-tubal-rank tensor estimation problems. We then analyze the specific cases of low-tubal-rank tensor factorization and low-tubal-rank tensor recovery. Our theoretical results show that APGD achieves linear convergence even under over-parameterization, with a convergence rate that is independent of the tensor condition number. Extensive simulations on synthetic data validate our theoretical assertions.
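
To make the mechanics concrete, here is a minimal sketch of alternating preconditioned gradient descent for the tensor-factorization case, assuming a plain least-squares objective and using the t-product (frontal-slice-wise multiplication in the Fourier domain along the third mode). The function names (`t_product`, `apgd_factorization`) and the parameters `eta`, `damping`, and `iters` are illustrative choices, not the paper's notation; the ScaledGD-style preconditioners and the damping term follow common practice rather than the paper's exact settings.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x r x n3) and B (r x n2 x n3): slice-wise
    matrix products in the Fourier domain along the third mode."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def apgd_factorization(T, r, eta=0.5, damping=1e-8, iters=200):
    """Alternating preconditioned GD for T ~ L * R (t-product),
    minimizing 0.5 * ||L * R - T||_F^2 slice by slice in the Fourier domain.
    `r` may overestimate the true tubal rank (over-parameterization)."""
    n1, n2, n3 = T.shape
    rng = np.random.default_rng(0)
    Lf = np.fft.fft(0.1 * rng.standard_normal((n1, r, n3)), axis=2)
    Rf = np.fft.fft(0.1 * rng.standard_normal((r, n2, n3)), axis=2)
    Tf = np.fft.fft(T, axis=2)
    for _ in range(iters):
        for k in range(n3):  # frontal slices decouple in the Fourier domain
            Lk, Rk, Tk = Lf[:, :, k], Rf[:, :, k], Tf[:, :, k]
            res = Lk @ Rk - Tk
            # precondition the gradient w.r.t. L by (R R^H + damping I)^{-1}
            P = Rk @ Rk.conj().T + damping * np.eye(r)
            Lk = Lk - eta * (res @ Rk.conj().T) @ np.linalg.inv(P)
            # alternate: recompute the residual with the updated L, then update R
            res = Lk @ Rk - Tk
            Q = Lk.conj().T @ Lk + damping * np.eye(r)
            Rk = Rk - eta * np.linalg.inv(Q) @ (Lk.conj().T @ res)
            Lf[:, :, k], Rf[:, :, k] = Lk, Rk
    return np.real(np.fft.ifft(Lf, axis=2)), np.real(np.fft.ifft(Rf, axis=2))

if __name__ == "__main__":
    # rank-3 ground truth, factorized with an overestimated rank of 6
    rng = np.random.default_rng(1)
    T = t_product(rng.standard_normal((30, 3, 8)), rng.standard_normal((3, 40, 8)))
    L, R = apgd_factorization(T, r=6)
    print("relative error:", np.linalg.norm(t_product(L, R) - T) / np.linalg.norm(T))
```

The preconditioners `(R R^H + damping I)^{-1}` and `(L^H L + damping I)^{-1}` are only r x r, so the extra cost per step is small, while rescaling the gradient in this way is what keeps the step sizes well-behaved when the rank is overestimated and the factors become ill-conditioned.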

Country of Origin
🇭🇰 Hong Kong

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)