Linear regression with overparameterized linear neural networks: Tight upper and lower bounds for implicit $\ell^1$-regularization
By: Hannes Matt, Dominik Stöger
Potential Business Impact:
Deeper AI learns better from less data.
Modern machine learning models are often trained in a setting where the number of parameters exceeds the number of training samples. To understand the implicit bias of gradient descent in such overparameterized models, prior work has studied diagonal linear neural networks in the regression setting. These studies have shown that, when initialized with small weights, gradient descent tends to favor solutions with minimal $\ell^1$-norm, an effect known as implicit regularization. In this paper, we investigate implicit regularization in diagonal linear neural networks of depth $D\ge 2$ for overparameterized linear regression problems. We focus on the approximation error between the limit point of gradient flow trajectories and the solution to the $\ell^1$-minimization problem. By deriving tight upper and lower bounds, we precisely characterize how this approximation error depends on the scale of initialization $\alpha$. Our results reveal a qualitative difference between network depths: for $D \ge 3$, the error decreases linearly with $\alpha$, whereas for $D=2$, it decreases at rate $\alpha^{1-\varrho}$, where the parameter $\varrho \in [0,1)$ can be explicitly characterized. Interestingly, this parameter is closely linked to so-called null space property constants studied in the sparse recovery literature. We demonstrate the asymptotic tightness of our bounds through explicit examples. Numerical experiments corroborate our theoretical findings and suggest that deeper networks, i.e., $D \ge 3$, may lead to better generalization, particularly for realistic initialization scales.
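To make the setting concrete, below is a minimal numerical sketch (not the authors' code) in the spirit of the setup described in the abstract: a depth-$D$ diagonal linear network with the parameterization $\beta = u^{\odot D} - v^{\odot D}$ commonly used in this literature, initialized at scale $\alpha$ and trained by plain gradient descent on an overparameterized least-squares problem, then compared against the minimum-$\ell^1$-norm interpolator computed via a linear program (basis pursuit). The problem sizes, step size, iteration count, and this specific parameterization are illustrative assumptions; the paper itself analyzes gradient flow and may use a different variant.

```python
# Sketch: depth-D diagonal linear network beta = u**D - v**D trained by
# gradient descent on an underdetermined regression problem, compared with
# the minimum-l1-norm interpolator. All hyperparameters are assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, s = 20, 50, 3            # samples, features, sparsity (toy sizes)
D, alpha = 3, 1e-2             # network depth and initialization scale alpha
X = rng.standard_normal((n, d)) / np.sqrt(n)
beta_star = np.zeros(d); beta_star[:s] = 1.0
y = X @ beta_star              # noiseless, overparameterized (n < d)

# Gradient descent on L(u, v) = 0.5 * ||X (u**D - v**D) - y||^2
u = alpha * np.ones(d)
v = alpha * np.ones(d)
lr = 1e-2
for _ in range(100_000):
    beta = u**D - v**D
    g = X.T @ (X @ beta - y)           # gradient w.r.t. beta
    u -= lr * g * D * u**(D - 1)       # chain rule through  u**D
    v -= lr * (-g) * D * v**(D - 1)    # chain rule through -v**D
beta_gd = u**D - v**D

# Minimum-l1-norm interpolator via basis pursuit as a linear program:
# min 1^T (p + q)  s.t.  X (p - q) = y,  p, q >= 0,  with beta_l1 = p - q
res = linprog(c=np.ones(2 * d),
              A_eq=np.hstack([X, -X]), b_eq=y,
              bounds=[(0, None)] * (2 * d))
beta_l1 = res.x[:d] - res.x[d:]

print("||beta_gd - beta_l1||_2 :", np.linalg.norm(beta_gd - beta_l1))
print("l1 norms (gd, l1-min)   :", np.abs(beta_gd).sum(), np.abs(beta_l1).sum())
```

Shrinking `alpha` in this sketch should drive the printed gap toward zero, which is the quantity whose dependence on $\alpha$ (linear for $D \ge 3$, rate $\alpha^{1-\varrho}$ for $D=2$) the paper makes precise; the LP reformulation is just one standard way to obtain the $\ell^1$-minimizer for comparison.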
Similar Papers
Diagonal Linear Networks and the Lasso Regularization Path
Machine Learning (CS)
Training computers works like a math shortcut.
Architecture independent generalization bounds for overparametrized deep ReLU networks
Machine Learning (CS)
Makes smart computer programs learn better, no matter how big.
Statistically guided deep learning
Statistics Theory
Makes computer learning more accurate and faster.