Stochastic Gradient Descent for Incomplete Tensor Linear Systems
By: Anna Ma, Deanna Needell, Alexander Xue
Potential Business Impact:
Enables very large tensor-structured systems of equations to be solved even when some of the data is missing.
Solving large tensor linear systems poses significant challenges due to the sheer volume of data involved, and the problem becomes even harder when some of the data is missing. Recently, Ma et al. showed that such systems can be tackled with a stochastic gradient descent-based method under the assumption that the missing data follows a uniform missing pattern. We adapt this technique by modifying the update direction and show that the method remains applicable under other missing-data models. We prove convergence results and verify them experimentally on synthetic data.
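The abstract does not spell out the update rule, so the following is only a minimal sketch of the general idea in the simpler setting of an ordinary linear system A x = b whose coefficient entries are observed independently with probability p (the uniform Bernoulli missing-data model referenced above). Observed entries are rescaled by 1/p and a diagonal correction term is subtracted so that, in expectation over the mask, the stochastic update direction matches the true per-row gradient. The dimensions, step size, and specific correction below are illustrative assumptions, not details taken from the paper, whose tensor-valued update may differ.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): SGD for a linear
# system A x = b when each entry of A is observed independently with
# probability p. Observed entries are rescaled by 1/p, and a diagonal term
# is subtracted so the update direction is unbiased for the per-row gradient.

rng = np.random.default_rng(0)

n, d, p = 500, 20, 0.7                 # equations, unknowns, observation probability
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

mask = rng.random((n, d)) < p          # True where an entry of A is observed
A_tilde = np.where(mask, A, 0.0) / p   # missing entries zeroed, observed ones rescaled

x = np.zeros(d)
step = 2.5e-4
for _ in range(200_000):
    i = rng.integers(n)                # sample a row uniformly at random
    a = A_tilde[i]
    # The naive estimate (a @ x - b[i]) * a is biased because the masked row
    # enters quadratically; subtracting (1 - p) * (a * a) * x removes that bias.
    g = (a @ x - b[i]) * a - (1.0 - p) * (a * a) * x
    x -= step * g

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

With a fixed step size the iterates settle in a neighborhood of the true solution rather than converging exactly; a diminishing step size would be needed to drive the error to zero, mirroring the convergence-horizon behavior typical of SGD analyses.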
Similar Papers
A stochastic column-block gradient descent method for solving nonlinear systems of equations
Numerical Analysis
Proposes a stochastic column-block gradient descent method for solving nonlinear systems of equations.
Global Convergence Analysis of Vanilla Gradient Descent for Asymmetric Matrix Completion
Machine Learning (CS)
Analyzes the global convergence of vanilla gradient descent for asymmetric matrix completion.
Revisiting Stochastic Approximation and Stochastic Gradient Descent
Optimization and Control
Revisits the theory behind stochastic approximation and stochastic gradient descent.