Score: 1

Provable Benefits of Sinusoidal Activation for Modular Addition

Published: November 28, 2025 | arXiv ID: 2511.23443v1

By: Tianlong Huang, Zhiyuan Li

Potential Business Impact:

Shows that using sine activations lets very small neural networks learn modular-arithmetic tasks exactly, needing far fewer training examples and generalizing to longer inputs better than standard ReLU networks.

Business Areas:
A/B Testing, Data and Analytics

This paper studies the role of activation functions in learning modular addition with two-layer neural networks. We first establish a sharp expressivity gap: sine MLPs admit width-$2$ exact realizations for any fixed length $m$ and, with bias, width-$2$ exact realizations uniformly over all lengths. In contrast, the width of ReLU networks must scale linearly with $m$ to interpolate, and they cannot simultaneously fit two lengths with different residues modulo $p$. We then provide a novel Natarajan-dimension generalization bound for sine networks, yielding nearly optimal sample complexity $\widetilde{\mathcal{O}}(p)$ for ERM over constant-width sine networks. We also derive width-independent, margin-based generalization bounds for sine networks in the overparametrized regime and validate them empirically. Across regimes, sine networks generalize consistently better than ReLU networks and exhibit strong length extrapolation.
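The width-$2$ claim can be understood through a standard Fourier-style construction: two sine hidden units encode the input sum as a point on the unit circle, and a linear readout of $p$ phase-matched logits peaks exactly at the correct residue. Below is a minimal NumPy sketch of that idea; the modulus `p = 7`, length `m = 5`, and the specific parametrization are illustrative assumptions, not necessarily the paper's exact network.

```python
import numpy as np

p = 7   # modulus (illustrative choice)
m = 5   # number of summands (illustrative choice)
rng = np.random.default_rng(0)

# Width-2 sine hidden layer: both units see the frequency 2*pi/p of the input sum s.
# h1 = sin(2*pi*s/p + pi/2) = cos(2*pi*s/p),   h2 = sin(2*pi*s/p)
W = np.full((2, m), 2 * np.pi / p)   # identical input weights on every coordinate
b = np.array([np.pi / 2, 0.0])       # phase shift turns the first unit into a cosine

# Linear readout: logit_k = cos(2*pi*k/p)*h1 + sin(2*pi*k/p)*h2 = cos(2*pi*(s - k)/p),
# which is maximized exactly at k = s mod p.
k = np.arange(p)
V = np.stack([np.cos(2 * np.pi * k / p), np.sin(2 * np.pi * k / p)], axis=1)  # (p, 2)

def sine_net(x):
    h = np.sin(W @ x + b)   # width-2 sine hidden layer
    return V @ h            # p class logits

# Sanity check: the argmax logit recovers the sum modulo p on random inputs.
for _ in range(1000):
    x = rng.integers(0, p, size=m)
    assert sine_net(x).argmax() == x.sum() % p
print("width-2 sine network realizes modular addition exactly")
```

Because the hidden weights depend on $m$ only through their shape (every coordinate gets the same weight $2\pi/p$), this sketch also illustrates why a bias term lets the same width-$2$ construction work uniformly over input lengths, in contrast to ReLU networks whose required width grows with $m$.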

Repos / Data Links

Page Count
60 pages

Category
Computer Science:
Machine Learning (CS)