Provable Benefits of Sinusoidal Activation for Modular Addition
By: Tianlong Huang, Zhiyuan Li
Potential Business Impact:
Shows that sinusoidal activations let small neural networks learn modular arithmetic more reliably, from fewer examples, and on longer inputs than ReLU networks.
This paper studies the role of activation functions in learning modular addition with two-layer neural networks. We first establish a sharp expressivity gap: sine MLPs admit width-$2$ exact realizations for any fixed length $m$ and, with bias, width-$2$ exact realizations uniformly over all lengths. In contrast, the width of ReLU networks must scale linearly with $m$ to interpolate, and they cannot simultaneously fit two lengths with different residues modulo $p$. We then provide a novel Natarajan-dimension generalization bound for sine networks, yielding nearly optimal sample complexity $\widetilde{\mathcal{O}}(p)$ for ERM over constant-width sine networks. We also derive a width-independent, margin-based generalization bound for sine networks in the overparametrized regime and validate it empirically. Empirically, sine networks generalize consistently better than ReLU networks across regimes and exhibit strong length extrapolation.
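As a concrete illustration of the width-$2$ expressivity claim, the sketch below builds a two-layer network with two sine hidden units (the second shifted by a bias of $\pi/2$ so it behaves as a cosine) and a $p$-way linear readout whose $k$-th logit equals $\cos\!\big(2\pi(\sum_i x_i - k)/p\big)$, which is maximized exactly at $k = \sum_i x_i \bmod p$. This is a minimal sketch of one such construction, not the paper's code; the raw-integer input encoding and the values $p=7$, $m=5$ are assumptions made here for illustration.

```python
import numpy as np

# Illustrative width-2 sine network for modular addition (not the paper's implementation).
# Assumptions: inputs are m integers in {0, ..., p-1}; p and m are chosen arbitrarily.
p, m = 7, 5

def sine_net_logits(x, p):
    """Two-layer network: 2 sine hidden units, p-way linear readout."""
    x = np.asarray(x, dtype=float)
    w = (2 * np.pi / p) * np.ones_like(x)       # first-layer weights (shared across hidden units)
    pre = w @ x                                  # pre-activation: 2*pi*sum(x)/p
    h = np.array([np.sin(pre),                   # hidden unit 1: sin
                  np.sin(pre + np.pi / 2)])      # hidden unit 2: cos, realized via a bias (phase shift)
    k = np.arange(p)
    # Readout: logit_k = sin(2*pi*k/p)*h[0] + cos(2*pi*k/p)*h[1] = cos(2*pi*(sum(x) - k)/p),
    # which attains its maximum of 1 exactly when k = sum(x) mod p.
    V = np.stack([np.sin(2 * np.pi * k / p), np.cos(2 * np.pi * k / p)], axis=1)
    return V @ h

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.integers(0, p, size=m)
    pred = int(np.argmax(sine_net_logits(x, p)))
    assert pred == int(x.sum()) % p
    print(x, "->", pred)
```

The same two hidden units work for any length $m$ because only the shared pre-activation $2\pi \sum_i x_i / p$ depends on the input, which is the sense in which the construction with bias is uniform over lengths.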
Similar Papers
From Taylor Series to Fourier Synthesis: The Periodic Linear Unit
Machine Learning (CS)
Makes AI smarter with fewer computer parts.
Sinusoidal Approximation Theorem for Kolmogorov-Arnold Networks
Machine Learning (Stat)
Makes AI learn better using wavy math.