Distributionally robust approximation property of neural networks
By: Mihriban Ceylan, David J. Prömel
Potential Business Impact:
Provides mathematical guarantees that common neural network architectures can approximate target functions reliably across whole families of data distributions, which is relevant for distributionally robust machine learning.
The universal approximation property uniformly with respect to weakly compact families of measures is established for several classes of neural networks. To that end, we prove that these neural networks are dense in Orlicz spaces, thereby extending classical universal approximation theorems even beyond the traditional $L^p$-setting. The covered classes of neural networks include widely used architectures like feedforward neural networks with non-polynomial activation functions, deep narrow networks with ReLU activation functions, and functional input neural networks.
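Concretely, the headline result can be read as follows (a hedged sketch in our own notation, which need not match the paper's exact formulation): given a weakly compact family $\mathcal{K}$ of probability measures, an Orlicz function $\Psi$, and a target $f$ in the corresponding Orlicz space, for every $\varepsilon > 0$ there exists a network $\varphi$ in the considered class such that $\sup_{\mu \in \mathcal{K}} \| f - \varphi \|_{L^{\Psi}(\mu)} < \varepsilon$. That is, a single network approximates $f$ simultaneously over all measures in $\mathcal{K}$, which is the distributionally robust reading of the universal approximation property; the classical $L^p$-statements correspond to a single fixed measure and $\Psi(x) = x^p$.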
Similar Papers
Approximation properties of neural ODEs
Numerical Analysis
Studies how well neural ordinary differential equations (neural ODEs) can approximate target functions.
Beyond Universal Approximation Theorems: Algorithmic Uniform Approximation by Neural Networks Trained with Noisy Data
Machine Learning (Stat)
Shows that neural networks trained on noisy data can still achieve uniform approximation guarantees.
Universal approximation property of neural stochastic differential equations
Probability
Shows that neural stochastic differential equations have the universal approximation property.