Approximation Rates of Shallow Neural Networks: Barron Spaces, Activation Functions and Optimality Analysis
By: Jian Lu, Xiaohuang Huang
Potential Business Impact:
Shows the limits of what small AI networks can learn, guiding the choice of activation functions and network design.
This paper investigates the approximation properties of shallow neural networks whose activation functions are powers of exponential functions, focusing on how the approximation rate depends on the input dimension and on the smoothness of the target function within the Barron space. We examine the rates attainable with ReLU$^{k}$ activation functions, proving that the optimal rate cannot be achieved under $\ell^{1}$-bounded coefficients or insufficient smoothness. We also establish optimal approximation rates in various norms for functions in Barron and Sobolev spaces, confirming the curse of dimensionality. These results clarify the limits of shallow neural networks' approximation capabilities and offer guidance on the choice of activation functions and network structures.
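For orientation, here is a minimal sketch of the setting these results concern; the notation follows the usual Barron-space convention and is an assumption of this summary, not taken from the paper itself. A shallow (one-hidden-layer) network with $n$ neurons and activation $\sigma$, e.g. $\sigma(t)=\max(t,0)^{k}$ for ReLU$^{k}$, takes the form
\[
f_{n}(x)=\sum_{j=1}^{n}a_{j}\,\sigma(w_{j}\cdot x+b_{j}),\qquad \sum_{j=1}^{n}|a_{j}|\le C\quad(\ell^{1}\text{-bounded coefficients}),
\]
and the classical Barron-type estimate gives, for $f$ in a Barron space $\mathcal{B}(\Omega)$, a dimension-independent $L^{2}$ bound of the form
\[
\|f-f_{n}\|_{L^{2}(\Omega)}\;\lesssim\;\|f\|_{\mathcal{B}(\Omega)}\,n^{-1/2}.
\]
The paper's questions concern whether, and in which norms, rates of this type can be improved or must deteriorate with the dimension and the smoothness of the target function.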
Similar Papers
Barron Space Representations for Elliptic PDEs with Homogeneous Boundary Conditions
Numerical Analysis
Lets computers solve hard math problems faster.
Nonlocal techniques for the analysis of deep ReLU neural network approximations
Machine Learning (CS)
Makes AI learn better from fewer examples.
Does the Barron space really defy the curse of dimensionality?
Functional Analysis
Makes AI learn better by understanding complex patterns.