Nonlocal techniques for the analysis of deep ReLU neural network approximations
By: Cornelia Schneider, Mario Ullrich, Jan Vybiral
Potential Business Impact:
Makes AI learn better from fewer examples.
Recently, Daubechies, DeVore, Foucart, Hanin, and Petrova introduced a system of piecewise linear functions which can easily be reproduced by artificial neural networks with the ReLU activation function and which forms a Riesz basis of $L_2([0,1])$. This work was generalized by two of the authors to the multivariate setting. We show that this system also serves as a Riesz basis for the Sobolev spaces $W^s([0,1]^d)$ and the Barron classes ${\mathbb B}^s([0,1]^d)$ with smoothness $0<s<1$. We apply this fact to re-prove some recent results on the approximation of functions from these classes by deep neural networks. Our proof method avoids local approximations, allows us to track the implicit constants, and shows that the curse of dimensionality can be avoided. Moreover, we study how well Sobolev and Barron functions can be approximated by ANNs when only function values are known.
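For intuition only, the sketch below illustrates the standard fact underlying such ReLU-reproducible systems, not the paper's actual basis construction: the univariate hat function is realized exactly by a one-hidden-layer ReLU network with three neurons, and its self-compositions yield finer piecewise linear "sawtooth" generators. The function names hat and sawtooth are illustrative, not taken from the paper.

    import numpy as np

    def relu(x):
        # ReLU activation, applied elementwise.
        return np.maximum(x, 0.0)

    def hat(x):
        # Hat function on [0,1] with peak 1 at x = 1/2, written exactly as a
        # one-hidden-layer ReLU network with three neurons.
        return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

    def sawtooth(x, k):
        # k-fold composition hat∘...∘hat: a piecewise linear sawtooth with
        # 2^(k-1) teeth, realized by a ReLU network of depth k.
        y = np.asarray(x, dtype=float)
        for _ in range(k):
            y = hat(y)
        return y

    # First two sawtooth generations on a coarse grid.
    xs = np.linspace(0.0, 1.0, 9)
    print(sawtooth(xs, 1))  # [0, 0.25, 0.5, 0.75, 1, 0.75, 0.5, 0.25, 0]
    print(sawtooth(xs, 2))  # [0, 0.5, 1, 0.5, 0, 0.5, 1, 0.5, 0]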
Similar Papers
An in-depth look at approximation via deep and narrow neural networks
Machine Learning (CS)
Makes AI learn better by fixing its mistakes.
Integral Representations of Sobolev Spaces via ReLU$^k$ Activation Function and Optimal Error Estimates for Linearized Networks
Numerical Analysis
Makes computers learn math faster and better.
Approximation theory for 1-Lipschitz ResNets
Machine Learning (CS)
Makes AI learn better and more reliably.