Gradient Descent as a Shrinkage Operator for Spectral Bias

Published: April 25, 2025 | arXiv ID: 2504.18207v1

By: Simon Lucey

Potential Business Impact:

Shows how to train neural networks in fewer iterations by tuning the learning rate, iteration count, and activation function to control which frequencies the network learns first.

Business Areas:
Data Visualization, Data and Analytics, Design, Information Technology, Software

We generalize the connection between activation functions and spline regression/smoothing, and characterize how this choice may influence spectral bias within a 1D shallow network. We then demonstrate how gradient descent (GD) can be reinterpreted as a shrinkage operator that masks the singular values of a neural network's Jacobian. Viewed this way, GD implicitly selects the number of frequency components to retain, thereby controlling the spectral bias. An explicit relationship is proposed between the choice of GD hyperparameters (learning rate and number of iterations) and bandwidth (the number of active components). GD regularization is shown to be effective only with monotonic activation functions. Finally, we highlight the utility of non-monotonic activation functions (sinc, Gaussian) as iteration-efficient surrogates for spectral bias.
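
The abstract's central reinterpretation, GD as a mask on singular values, has a well-known closed form in the linear least-squares analogue (a standard Landweber-iteration result, used here for illustration rather than taken from the paper). There, t steps of GD with learning rate lr, started from zero, equal the pseudo-inverse solution with each singular direction shrunk by the factor phi_i = 1 - (1 - lr * s_i^2)^t, so lr and t jointly set how many components stay "active". The sketch below verifies this numerically; the random problem and variable names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): gradient descent on linear least squares
# acts as a shrinkage operator on the singular values of the design matrix,
# which stands in for a network's Jacobian.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))   # design matrix (Jacobian stand-in)
y = rng.standard_normal(200)         # targets

U, s, Vt = np.linalg.svd(X, full_matrices=False)
lr, t = 1e-4, 50                     # learning rate and iteration count

# Plain gradient descent from w = 0 on 0.5 * ||X w - y||^2.
w = np.zeros(X.shape[1])
for _ in range(t):
    w -= lr * X.T @ (X @ w - y)

# Closed form: GD applies the mask phi_i to each singular direction
# of the pseudo-inverse solution.
phi = 1.0 - (1.0 - lr * s**2) ** t   # shrinkage mask, each entry in [0, 1]
w_closed = Vt.T @ (phi * (U.T @ y) / s)

print("max |gd - closed form|:", np.max(np.abs(w - w_closed)))
print("mask on largest vs. smallest singular value:", phi[0], phi[-1])
```

Running this, the GD iterate matches the closed form to numerical precision, and the mask sits near 1 for the largest singular value but well below 1 for the smallest, mirroring the paper's claim that the learning rate and iteration count implicitly select how many components (the bandwidth) GD retains.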

Country of Origin
🇦🇺 Australia

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)