Sharp Lower Bounds for Linearized ReLU^k Approximation on the Sphere

Published: October 5, 2025 | arXiv ID: 2510.04060v1

By: Tong Mao, Jinchao Xu

Potential Business Impact:

Quantifies the fastest possible accuracy gain for a common class of shallow neural networks, showing where adding more neurons stops paying off.

Business Areas:
A/B Testing Data and Analytics

We prove a saturation theorem for linearized shallow ReLU$^k$ neural networks on the unit sphere $\mathbb S^d$. For any antipodally quasi-uniform set of centers, if the target function has smoothness $r>\tfrac{d+2k+1}{2}$, then the best $\mathcal{L}^2(\mathbb S^d)$ approximation cannot converge faster than order $n^{-\frac{d+2k+1}{2d}}$. This lower bound matches existing upper bounds, thereby establishing the exact saturation order $\tfrac{d+2k+1}{2d}$ for such networks. Our results place linearized neural-network approximation firmly within the classical saturation framework and show that, although ReLU$^k$ networks outperform finite elements under equal degrees $k$, this advantage is intrinsically limited.
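For concreteness, a hedged sketch of the setting implied by the abstract: in the linearized model the centers are fixed and only the outer coefficients are optimized, so the lower bound is over a linear space of dictionary expansions. The precise class definition and the notion of antipodal quasi-uniformity below follow the standard formulation and are assumptions, not quotations from the paper.

$$
V_n \;=\; \Big\{\, x \mapsto \sum_{i=1}^{n} a_i\,\mathrm{ReLU}^k(\xi_i \cdot x) \;:\; a_1,\dots,a_n \in \mathbb R \,\Big\},
\qquad \mathrm{ReLU}^k(t) = \max(t,0)^k,
$$

with fixed, antipodally quasi-uniform centers $\xi_1,\dots,\xi_n \in \mathbb S^d$. The saturation statement then reads: for target functions $f$ of smoothness $r > \tfrac{d+2k+1}{2}$ (excluding trivially representable targets),

$$
\inf_{g \in V_n} \| f - g \|_{\mathcal L^2(\mathbb S^d)} \;\gtrsim\; n^{-\frac{d+2k+1}{2d}},
$$

which matches the known upper bound of the same order and hence pins down the saturation rate.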

Page Count
15 pages

Category
Mathematics:
Numerical Analysis (Math)