Score: 1

Improved universal approximation with neural networks studied via affine-invariant subspaces of $L_2(\mathbb{R}^n)$

Published: April 3, 2025 | arXiv ID: 2504.02445v1

By: Cornelia Schneider, Samuel Probst

Potential Business Impact:

Shows that almost any activation function lets a simple neural network approximate any target function, widening design freedom for AI models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We show that there are no non-trivial closed subspaces of $L_2(\mathbb{R}^n)$ that are invariant under invertible affine transformations. We apply this result to neural networks, showing that any nonzero $L_2(\mathbb{R})$ function is an adequate activation function in a one-hidden-layer neural network for approximating every function in $L_2(\mathbb{R})$ to any desired accuracy. This generalizes the universal approximation properties of neural networks in $L_2(\mathbb{R})$ related to Wiener's Tauberian Theorems. Our results extend to the spaces $L_p(\mathbb{R})$ with $p>1$.
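The claim can be illustrated numerically: take an arbitrary nonzero $L_2(\mathbb{R})$ activation (below, a Gaussian bump, which is neither a sigmoid nor a ReLU), form affine-transformed copies $\sigma(a_i x + b_i)$ as hidden units, and fit the output weights by least squares. This is only an informal sketch of the approximation statement, not the paper's proof; all function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(t):
    # An arbitrary nonzero L2(R) activation: a Gaussian bump
    # (deliberately not a standard sigmoid or ReLU).
    return np.exp(-t**2)

def target(x):
    # A target function in L2(R): a localized oscillation.
    return np.sin(3 * x) * np.exp(-x**2 / 2)

x = np.linspace(-5, 5, 400)           # evaluation grid
N = 60                                # number of hidden units
a = rng.uniform(0.5, 4.0, N)          # random dilations (invertible affine maps)
b = rng.uniform(-5.0, 5.0, N)         # random translations

# Design matrix: column i is sigma(a_i * x + b_i), i.e. one hidden unit.
Phi = sigma(np.outer(x, a) + b)

# One-hidden-layer network: fit output-layer weights c by least squares.
c, *_ = np.linalg.lstsq(Phi, target(x), rcond=None)

approx = Phi @ c
rel_err = np.linalg.norm(approx - target(x)) / np.linalg.norm(target(x))
print(f"relative L2 error on the grid: {rel_err:.2e}")
```

With enough hidden units, the relative error on the grid becomes small, consistent with the theorem's assertion that affine-transformed copies of any nonzero $L_2$ activation span a dense subspace.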

Country of Origin
🇩🇪 Germany

Page Count
7 pages

Category
Mathematics:
Functional Analysis