Using physics-inspired Singular Learning Theory to understand grokking & other phase transitions in modern neural networks

Published: November 30, 2025 | arXiv ID: 2512.00686v3

By: Anish Lakkapragada

Potential Business Impact:

Explains why modern neural networks learn so well despite violating classical statistical assumptions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Classical statistical inference and learning theory often fail to explain the success of modern neural networks. A key reason is that these models are non-identifiable (singular), violating core assumptions behind PAC bounds and asymptotic normality. Singular learning theory (SLT), a physics-inspired framework grounded in algebraic geometry, has gained popularity for its ability to close this theory-practice gap. In this paper, we empirically study SLT in toy settings relevant to interpretability and phase transitions. First, we examine the SLT free energy $\mathcal{F}_n$ by testing an Arrhenius-style rate hypothesis on both a grokking modulo-arithmetic model and Anthropic's Toy Models of Superposition. Second, we examine the local learning coefficient $\lambda_\alpha$ by measuring how it scales with problem difficulty across several controlled network families (polynomial regressors, low-rank linear networks, and low-rank autoencoders). Some of our experiments recover known scaling laws, while others yield meaningful deviations from theoretical expectations. Overall, our paper illustrates the many merits of SLT for understanding neural network phase transitions and poses open research questions for the field.
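To make the abstract's quantities concrete, the sketch below estimates a local learning coefficient with the SGLD-based estimator $\hat\lambda = n\beta\,(\mathbb{E}_\beta[L_n(w)] - L_n(w^*))$, $\beta = 1/\log n$, common in the SLT literature. The toy model $f(x; w) = w^k x$, the data-generating process, and all hyperparameters are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

# Hedged sketch: SGLD-based estimate of a local learning coefficient (LLC)
# for an assumed toy polynomial regressor f(x; w) = w**k * x. For a population
# loss behaving like w**(2k) near w* = 0, SLT predicts lambda = 1 / (2k).

rng = np.random.default_rng(0)
n, k = 1000, 2                       # sample size; polynomial degree
x = rng.uniform(-1.0, 1.0, size=n)
y = rng.normal(scale=0.1, size=n)    # data generated at the singular point w* = 0

def loss(w):
    """Empirical mean-squared error L_n(w)."""
    return np.mean((y - w**k * x) ** 2)

def grad(w):
    """Derivative d L_n / d w."""
    return -2.0 * k * w ** (k - 1) * np.mean(x * (y - w**k * x))

w_star = 0.0
beta = 1.0 / np.log(n)               # inverse temperature beta = 1 / log n
eps, gamma = 1e-3, 1.0               # SGLD step size; localization strength
burn_in, steps = 1000, 5000

w, samples = w_star, []
for t in range(steps):
    # Langevin step targeting the localized tempered posterior
    #   exp(-n * beta * L_n(w) - (gamma / 2) * (w - w*)**2)
    drift = -(eps / 2.0) * (n * beta * grad(w) + gamma * (w - w_star))
    w = w + drift + np.sqrt(eps) * rng.normal()
    if t >= burn_in:
        samples.append(loss(w))

# LLC estimator: lambda_hat = n * beta * (E_beta[L_n(w)] - L_n(w*))
lambda_hat = n * beta * (np.mean(samples) - loss(w_star))
print(f"lambda_hat ~= {lambda_hat:.3f} (theory for k={k}: {1 / (2 * k):.3f})")
```

The estimate is sensitive to the step size $\varepsilon$ and the localization strength $\gamma$, which here are chosen by hand. The Arrhenius-style rate test mentioned in the abstract is, in the same spirit, a simple check: fit log transition rate against an inverse-temperature analogue and verify linearity.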

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)