Provable wavelet-based neural approximation
By: Youngmi Hur, Hyojae Lim, Mikyoung Lim
Potential Business Impact:
Helps computers learn any pattern, even tricky ones.
In this paper, we develop a wavelet-based theoretical framework for analyzing the universal approximation capabilities of neural networks over a wide range of activation functions. Leveraging wavelet frame theory on spaces of homogeneous type, we derive sufficient conditions on activation functions ensuring that the associated neural network can approximate any function in the given space, together with an explicit error estimate. These sufficient conditions accommodate a variety of smooth activation functions, including those that exhibit oscillatory behavior. Furthermore, by considering the $L^2$-distance between smooth and non-smooth activation functions, we establish a generalized approximation result that applies to non-smooth activations, with the error explicitly controlled by this distance. This provides increased flexibility in the design of network architectures.
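To illustrate the abstract's final point, the minimal sketch below numerically estimates the $L^2$-distance between a non-smooth activation and a smooth surrogate; the paper's generalized result controls the approximation error of the non-smooth network by such a distance. The choice of ReLU versus softplus, the interval $[-5, 5]$, and the sharpness parameter `beta` are illustrative assumptions, not the paper's specific setup.

```python
import numpy as np

def l2_distance(f, g, a=-5.0, b=5.0, n=200_000):
    """Estimate ||f - g||_{L^2([a, b])} with a simple Riemann sum."""
    x, dx = np.linspace(a, b, n, retstep=True)
    return np.sqrt(np.sum((f(x) - g(x)) ** 2) * dx)

def relu(x):
    # Non-smooth activation (kink at the origin).
    return np.maximum(x, 0.0)

def softplus(x, beta=1.0):
    # Numerically stable smooth surrogate; converges to ReLU as beta grows.
    return np.logaddexp(0.0, beta * x) / beta

# As the surrogate tightens, the L^2-distance shrinks, and with it
# the error term the paper attributes to the non-smooth activation.
for beta in (1.0, 4.0, 16.0):
    d = l2_distance(relu, lambda x: softplus(x, beta))
    print(f"beta = {beta:5.1f}: ||ReLU - softplus||_L2 ~ {d:.4f}")
```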
Similar Papers
Improved universal approximation with neural networks studied via affine-invariant subspaces of $L_2(\mathbb{R}^n)$
Functional Analysis
Makes AI learn any task with simple math.
Approximation properties of neural ODEs
Numerical Analysis
Makes smart computer programs learn better.
An in-depth look at approximation via deep and narrow neural networks
Machine Learning (CS)
Makes AI learn better by fixing its mistakes.