Integral Representations of Sobolev Spaces via ReLU$^k$ Activation Function and Optimal Error Estimates for Linearized Networks
By: Xinliang Liu, Tong Mao, Jinchao Xu
Potential Business Impact:
Shows that simple (shallow) neural networks can represent and approximate smooth mathematical functions with provably optimal accuracy, informing the design of efficient function-fitting models.
This paper presents two main theoretical results concerning shallow neural networks with ReLU$^k$ activation functions. We establish a novel integral representation for Sobolev spaces, showing that every function in $H^{\frac{d+2k+1}{2}}(\Omega)$ can be expressed as an $L^2$-weighted integral of ReLU$^k$ ridge functions over the unit sphere. This result mirrors the known representation of Barron spaces and highlights a fundamental connection between Sobolev regularity and neural network representations. Moreover, we prove that linearized shallow networks -- constructed by fixing the inner parameters and optimizing only the linear coefficients -- achieve the optimal approximation rate $O(n^{-\frac{1}{2}-\frac{2k+1}{2d}})$ in Sobolev spaces.
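To make the "linearized network" construction concrete, here is a minimal numerical sketch: the inner parameters (directions $\omega_j$ on the unit sphere and biases $b_j$) are fixed in advance, and only the outer linear coefficients are fitted by least squares. The particular choice of inner parameters below (uniform random directions and biases) and the function names are illustrative assumptions, not the specific construction analyzed in the paper.

```python
import numpy as np

def relu_k(z, k):
    """ReLU^k activation: max(z, 0) raised to the k-th power."""
    return np.maximum(z, 0.0) ** k

def fit_linearized_relu_k(x, y, n_neurons=200, k=2, seed=0):
    """Fit y ~ sum_j c_j * ReLU^k(omega_j . x + b_j) with fixed (omega_j, b_j).

    x : (N, d) sample points, y : (N,) target values.
    Returns the fixed inner parameters and the fitted coefficients c.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Fixed inner parameters: directions drawn uniformly on the unit sphere,
    # biases drawn uniformly on an interval covering the data (an assumption
    # made here purely for illustration).
    omega = rng.normal(size=(n_neurons, d))
    omega /= np.linalg.norm(omega, axis=1, keepdims=True)
    b = rng.uniform(-1.0, 1.0, size=n_neurons)
    # Feature matrix Phi[i, j] = ReLU^k(omega_j . x_i + b_j); since the model
    # is linear in c, the coefficients solve a least-squares problem.
    phi = relu_k(x @ omega.T + b, k)
    c, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return omega, b, c

def predict(x, omega, b, c, k=2):
    return relu_k(x @ omega.T + b, k) @ c

if __name__ == "__main__":
    # Toy example: approximate a smooth target on [-1, 1]^2.
    rng = np.random.default_rng(1)
    x = rng.uniform(-1.0, 1.0, size=(2000, 2))
    y = np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
    omega, b, c = fit_linearized_relu_k(x, y, n_neurons=400, k=2)
    err = np.sqrt(np.mean((predict(x, omega, b, c) - y) ** 2))
    print(f"empirical L2 training error: {err:.3e}")
```

The optimization step is a plain linear least-squares solve precisely because the inner parameters are frozen; the paper's rate $O(n^{-\frac{1}{2}-\frac{2k+1}{2d}})$ concerns how well such fixed-feature models can approximate Sobolev functions as the number of neurons $n$ grows.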
Similar Papers
Sharp Lower Bounds for Linearized ReLU^k Approximation on the Sphere
Numerical Analysis
Proves sharp lower bounds on how well linearized ReLU^k networks can approximate functions on the sphere.
Nonlocal techniques for the analysis of deep ReLU neural network approximations
Machine Learning (CS)
Uses nonlocal analysis techniques to study approximation by deep ReLU networks.
Approximation Rates of Shallow Neural Networks: Barron Spaces, Activation Functions and Optimality Analysis
Machine Learning (CS)
Analyzes approximation rates of shallow networks over Barron spaces and the optimality of different activation functions.