Analytic Regularity and Approximation Limits of Coefficient-Constrained Shallow Networks
By: Jean-Gabriel Attali
Potential Business Impact:
Simple neural networks with tightly constrained weights cannot approximate rough (non-analytic) functions any faster than classical polynomial methods.
We study approximation limits of single-hidden-layer neural networks with analytic activation functions under global coefficient constraints. Under uniform $\ell^1$ bounds, or more generally sub-exponential growth of the coefficients, we show that such networks generate model classes with strong quantitative regularity, leading to uniform analyticity of the realized functions. As a consequence, up to an exponentially small residual term, the error of best network approximation for generic target functions is bounded from below by the error of best polynomial approximation. In particular, networks with analytic activation functions and controlled coefficients cannot outperform classical polynomial approximation rates on non-analytic targets. The underlying rigidity phenomenon extends to smoother, non-analytic activations satisfying Gevrey-type regularity assumptions, yielding sub-exponential variants of the approximation barrier. The analysis is entirely deterministic and relies on a comparison argument combined with classical Bernstein-type estimates; extensions to higher dimensions are also discussed.
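To make the main inequality concrete, here is one plausible way to state the barrier; the notation below (the constrained network class $\mathcal{N}_{n,C}$, the coupled polynomial degree $m(n)$, and the constants $c, C' > 0$) is illustrative shorthand, not taken verbatim from the paper. Writing

$$
\mathcal{N}_{n,C} \;=\; \Bigl\{\, x \mapsto \sum_{k=1}^{n} a_k\,\sigma(w_k x + b_k) \;:\; \sum_{k=1}^{n} |a_k| \le C,\ \ |w_k|,\,|b_k| \le C \,\Bigr\}
$$

for an analytic activation $\sigma$, and $\mathcal{P}_m$ for polynomials of degree at most $m$, the claimed lower bound on a compact set $K$ would read

$$
\inf_{g \in \mathcal{N}_{n,C}} \|f - g\|_{L^\infty(K)} \;\ge\; \inf_{p \in \mathcal{P}_{m(n)}} \|f - p\|_{L^\infty(K)} \;-\; C'\,e^{-c\,m(n)}.
$$

Read this way, a coefficient-constrained network can beat the best polynomial of the matched degree only by an exponentially small margin, so on a non-analytic target it inherits the classical polynomial rate (for example, of order $1/m$ for $f(x)=|x|$ on $[-1,1]$).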
Similar Papers
Approximation Rates of Shallow Neural Networks: Barron Spaces, Activation Functions and Optimality Analysis
Machine Learning (CS)
Makes AI learn better with fewer steps.
Geometry and Optimization of Shallow Polynomial Networks
Machine Learning (CS)
Teaches computers to learn from data patterns.