Sharp Lower Bounds for Linearized ReLU^k Approximation on the Sphere
By: Tong Mao, Jinchao Xu
Potential Business Impact:
Pins down the fastest possible accuracy these simplified neural networks can reach as they grow, so practitioners know when adding more neurons stops paying off.
We prove a saturation theorem for linearized shallow ReLU$^k$ neural networks on the unit sphere $\mathbb S^d$. For any antipodally quasi-uniform set of $n$ centers, if the target function has smoothness $r>\tfrac{d+2k+1}{2}$, then the best $\mathcal{L}^2(\mathbb S^d)$ approximation cannot converge faster than order $n^{-\frac{d+2k+1}{2d}}$. This lower bound matches existing upper bounds, thereby establishing the exact saturation order $\tfrac{d+2k+1}{2d}$ for such networks. Our results place linearized neural-network approximation firmly within the classical saturation framework and show that, although ReLU$^k$ networks outperform finite elements under equal degrees $k$, this advantage is intrinsically limited.
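As a concrete instance of the stated rate (the values $d=2$, $k=1$ are chosen here purely for illustration): on the sphere $\mathbb S^2$ with ordinary ReLU activation, the saturation exponent is $\tfrac{d+2k+1}{2d}=\tfrac{2+2\cdot 1+1}{2\cdot 2}=\tfrac{5}{4}$, so for target functions of smoothness $r>\tfrac{5}{2}$ the best $\mathcal{L}^2(\mathbb S^2)$ error over $n$ antipodally quasi-uniform centers decays no faster than $n^{-5/4}$.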
Similar Papers
Integral Representations of Sobolev Spaces via ReLU$^k$ Activation Function and Optimal Error Estimates for Linearized Networks
Numerical Analysis
Shows how to build smooth (Sobolev) functions from ReLU$^k$ pieces with best-possible error guarantees for linearized networks.
The stability of shallow neural networks on spheres: A sharp spectral analysis
Numerical Analysis
Gives a sharp spectral analysis of how numerically stable shallow networks are on spheres.
Condition Numbers and Eigenvalue Spectra of Shallow Networks on Spheres
Numerical Analysis
Measures the condition numbers and eigenvalue spectra that govern how reliably shallow spherical networks can be computed.