Gradient Descent as a Shrinkage Operator for Spectral Bias
By: Simon Lucey
Potential Business Impact:
Could make AI models train faster by controlling which details they learn first.
We generalize the connection between the activation function and spline regression/smoothing, and characterize how this choice may influence spectral bias within a 1D shallow network. We then demonstrate how gradient descent (GD) can be reinterpreted as a shrinkage operator that masks the singular values of a neural network's Jacobian. Viewed this way, GD implicitly selects the number of frequency components to retain, thereby controlling the spectral bias. An explicit relationship is proposed between the choice of GD hyperparameters (learning rate and number of iterations) and bandwidth (the number of active components). GD regularization is shown to be effective only with monotonic activation functions. Finally, we highlight the utility of non-monotonic activation functions (sinc, Gaussian) as iteration-efficient surrogates for spectral bias.
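As a minimal sketch of the shrinkage view, consider GD on a linear least-squares problem, where the design matrix stands in for the network's Jacobian. The toy matrix, learning rate, and iteration count below are illustrative assumptions, not taken from the paper; the closed form they verify is the standard spectral-filtering identity for GD, under which the pair (learning rate, iterations) sets an effective singular-value cutoff, i.e. a bandwidth.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50

# Toy "Jacobian" with a decaying spectrum so the masking effect is visible.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
s = np.logspace(0, -3, d)                  # singular values from 1 down to 1e-3
X = (U * s) @ V.T
y = X @ rng.standard_normal(d)

eta, T = 1.0, 100                          # learning rate, number of iterations
w = np.zeros(d)
for _ in range(T):                         # plain GD on 0.5 * ||X w - y||^2
    w -= eta * X.T @ (X @ w - y)

# The same iterate in closed form: component i of the least-squares solution
# is shrunk by the filter factor f_i = 1 - (1 - eta * s_i^2)^T, which is ~1
# when eta * T * s_i^2 >> 1 (component retained) and ~0 when
# eta * T * s_i^2 << 1 (component masked).
f = 1.0 - (1.0 - eta * s**2) ** T
w_svd = V @ (f / s * (U.T @ y))

print(np.allclose(w, w_svd))               # True: GD == singular-value shrinkage
print(f"retained: f[0]={f[0]:.3f}, masked: f[-1]={f[-1]:.1e}")
```

In this sketch, raising the learning rate or the iteration count lowers the singular-value cutoff (roughly s ~ 1/sqrt(eta * T)), so more components are retained; this is the sense in which the GD hyperparameters select a bandwidth.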
Similar Papers
The Spectral Bias of Shallow Neural Network Learning is Shaped by the Choice of Non-linearity
Machine Learning (CS)
Shows how a network's choice of non-linearity shapes what it learns first.
Gradient Descent Converges Linearly to Flatter Minima than Gradient Flow in Shallow Linear Networks
Machine Learning (CS)
Shows gradient descent quickly finds flatter, more stable solutions.
Non-Singularity of the Gradient Descent map for Neural Networks with Piecewise Analytic Activations
Optimization and Control
Shows the gradient descent update behaves predictably for many networks.