
Low-Rank Tensor Decompositions for the Theory of Neural Networks

Published: August 25, 2025 | arXiv ID: 2508.18408v1

By: Ricardo Borsoi, Konstantin Usevich, Marianne Clausel

Potential Business Impact:

Provides mathematical explanations for why deep neural networks learn and generalize so well.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The groundbreaking performance of deep neural networks (NNs) has prompted a surge of interest in providing a mathematical basis for deep learning theory. Low-rank tensor decompositions are especially well suited to this task due to their close connection to NNs and their rich theoretical results. Different tensor decompositions enjoy strong uniqueness guarantees, which allow for a direct interpretation of their factors, and polynomial-time algorithms have been proposed to compute them. Through the connections between tensors and NNs, such results have supported many important advances in the theory of NNs. In this review, we show how low-rank tensor methods, which have been a core tool in the signal processing and machine learning communities, play a fundamental role in theoretically explaining different aspects of the performance of deep NNs, including their expressivity, algorithmic learnability and computational hardness, generalization, and identifiability. Our goal is to give an accessible overview of existing approaches (developed by different communities, ranging from computer science to mathematics) in a coherent and unified way, and to open a broader perspective on the use of low-rank tensor decompositions for the theory of deep NNs.
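
The central objects of the review are low-rank tensor decompositions such as the canonical polyadic (CP) decomposition. As a rough, self-contained illustration (not taken from the paper; the function names and the random test tensor are assumptions for this example), the sketch below fits a rank-R CP model to a 3-way NumPy array with a plain alternating least squares loop.

```python
import numpy as np

def khatri_rao(X, Y):
    # Column-wise Kronecker product: rows indexed by (row of X, row of Y),
    # with the second index varying fastest (matches the reshapes below).
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als(T, rank, n_iters=200, seed=0):
    """Fit a rank-`rank` CP model to a 3-way tensor T by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))

    # Mode-n unfoldings (C-order reshapes, consistent with khatri_rao above).
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)

    for _ in range(n_iters):
        # Each update solves a linear least-squares problem for one factor
        # while the other two are held fixed.
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Usage: build an exactly rank-2 tensor and recover a rank-2 model.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((5, 2)), rng.standard_normal((6, 2)), rng.standard_normal((7, 2))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print('relative error:', np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```

Under mild conditions (e.g., Kruskal's condition), the recovered CP factors are essentially unique up to permutation and scaling; this kind of identifiability result is what the review connects to the theory of deep NNs.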

Country of Origin
🇫🇷 France

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)