Improved universal approximation with neural networks studied via affine-invariant subspaces of $L_2(\mathbb{R}^n)$
By: Cornelia Schneider, Samuel Probst
Potential Business Impact:
Lets AI builders use almost any simple function as a neural network's activation.
We show that there are no non-trivial closed subspaces of $L_2(\mathbb{R}^n)$ that are invariant under invertible affine transformations. We apply this result to neural networks, showing that any nonzero function in $L_2(\mathbb{R})$ is an adequate activation function for a one-hidden-layer neural network to approximate every function in $L_2(\mathbb{R})$ with any desired accuracy. This generalizes the universal approximation properties of neural networks in $L_2(\mathbb{R})$ related to Wiener's Tauberian Theorems. Our results extend to the spaces $L_p(\mathbb{R})$ with $p>1$.
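Read concretely (a paraphrase of the abstract's claim in the one-dimensional case, not a statement taken verbatim from the paper), the approximation result says that for every nonzero $\sigma \in L_2(\mathbb{R})$ the one-hidden-layer networks
$$x \;\mapsto\; \sum_{i=1}^{N} c_i\,\sigma(a_i x + b_i), \qquad c_i, b_i \in \mathbb{R},\; a_i \neq 0,$$
are dense in $L_2(\mathbb{R})$. The link to the subspace theorem is that the closed span of the affine orbit $\{\sigma(a\,\cdot\, + b) : a \neq 0,\ b \in \mathbb{R}\}$ is a closed subspace that is invariant under invertible affine transformations and contains the nonzero function $\sigma$, so by the theorem it cannot be a proper subspace of $L_2(\mathbb{R})$.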
Similar Papers
Provable wavelet-based neural approximation
Machine Learning (CS)
Helps computers learn any pattern, even tricky ones.
Distributionally robust approximation property of neural networks
Machine Learning (Stat)
Makes AI learn better with more math.
Nonlocal techniques for the analysis of deep ReLU neural network approximations
Machine Learning (CS)
Makes AI learn better from fewer examples.