Does the Barron space really defy the curse of dimensionality?
By: Olov Schavemaker
Potential Business Impact:
Helps predict when neural networks will struggle with very high-dimensional problems.
The Barron space has become famous in the theory of (shallow) neural networks because it seemingly defies the curse of dimensionality. While the Barron space and its generalizations do indeed defy the curse of dimensionality from the point of view of classical smoothness, we provide evidence that they do not defy it under a nonclassical notion of smoothness, one that relates naturally to "infinitely wide" shallow neural networks. Just as the Bessel potential spaces are defined via the Fourier transform, we define so-called ADZ spaces via the Mellin transform; these ADZ spaces encapsulate the nonclassical smoothness alluded to above.
Comments: 38 pages; will appear in the dissertation of the author.
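For orientation, here is a minimal LaTeX sketch of the analogy the abstract draws. The spectral Barron norm, the Bessel potential norm, and the Mellin transform are standard definitions; the final "ADZ-style" line is purely illustrative, since the paper's actual definition is not reproduced here, and the weight w_sigma is a hypothetical placeholder.

% Spectral Barron norm (one common variant; the classical case is s = 1):
\|f\|_{\mathcal{B}^{s}} = \int_{\mathbb{R}^{d}} (1+|\xi|)^{s}\,|\widehat{f}(\xi)|\,\mathrm{d}\xi

% Bessel potential space H^{s,p}: classical smoothness measured through
% a Fourier multiplier (standard definition)
\|f\|_{H^{s,p}} = \bigl\| \mathcal{F}^{-1}\bigl[(1+|\xi|^{2})^{s/2}\,\widehat{f}\,\bigr] \bigr\|_{L^{p}(\mathbb{R}^{d})}

% Mellin transform of g : (0,\infty) \to \mathbb{C} (standard definition)
(\mathcal{M}g)(s) = \int_{0}^{\infty} x^{s-1}\, g(x)\,\mathrm{d}x

% ADZ-style norm (illustrative template only): the same construction with
% the Fourier transform replaced by the Mellin transform; w_sigma is a
% hypothetical smoothness weight, not the paper's definition
\|g\|_{\mathrm{ADZ}^{\sigma}} \approx \bigl\| \mathcal{M}^{-1}\bigl[\, w_{\sigma}\,\mathcal{M}g \,\bigr] \bigr\|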
Similar Papers
Approximation Rates of Shallow Neural Networks: Barron Spaces, Activation Functions and Optimality Analysis
Machine Learning (CS)
Measures how accurately small neural networks can approximate complicated functions.
Barron Space Representations for Elliptic PDEs with Homogeneous Boundary Conditions
Numerical Analysis
Shows that solutions of certain physics-style equations can be represented by shallow neural networks.
Nonlocal techniques for the analysis of deep ReLU neural network approximations
Machine Learning (CS)
Offers new mathematical tools for understanding how deep ReLU networks approximate functions.