Stagewise Reinforcement Learning and the Geometry of the Regret Landscape

Published: January 12, 2026 | arXiv ID: 2601.07524v1

By: Chris Elliott, Einar Urdshals, David Quarel, and others

Potential Business Impact:

Offers a way to detect when a reinforcement learning system switches to a qualitatively different strategy during training, even when its measured performance looks unchanged, which could help monitor and debug learned decision-making systems.

Business Areas:
Artificial Intelligence / Machine Learning, Science and Engineering

Singular learning theory characterizes Bayesian learning as an evolving tradeoff between accuracy and complexity, with transitions between qualitatively different solutions as sample size increases. We extend this theory to deep reinforcement learning, proving that the concentration of the generalized posterior over policies is governed by the local learning coefficient (LLC), an invariant of the geometry of the regret function. This theory predicts that Bayesian phase transitions in reinforcement learning should proceed from simple policies with high regret to complex policies with low regret. We verify this prediction empirically in a gridworld environment exhibiting stagewise policy development: phase transitions over SGD training manifest as "opposing staircases" where regret decreases sharply while the LLC increases. Notably, the LLC detects phase transitions even when estimated on a subset of states where the policies appear identical in terms of regret, suggesting it captures changes in the underlying algorithm rather than just performance.
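For readers unfamiliar with the local learning coefficient (LLC): in singular learning theory the Bayesian free energy expands roughly as n L_n(w*) + lambda log n, so lambda (the LLC) acts as the complexity term traded off against accuracy. In practice the LLC is often estimated from SGLD samples localized around the trained parameters. The sketch below illustrates that general estimator only; the model, loss_fn, data_loader, and all hyperparameters are placeholders and are not the paper's setup or code.

```python
import math
import torch

def estimate_llc(model, loss_fn, data_loader, n_data=10_000,
                 n_burnin=100, n_samples=500,
                 step_size=1e-4, localization=100.0):
    """Rough SGLD-based estimate of the local learning coefficient (LLC).

    Uses the estimator  lambda_hat = n * beta * (E_w[L_n(w)] - L_n(w*)),
    with beta = 1 / log(n), where the expectation is taken over SGLD
    samples drawn near the trained parameters w*.
    """
    beta = 1.0 / math.log(n_data)
    w_star = [p.detach().clone() for p in model.parameters()]

    # Average loss at the trained point w*.
    with torch.no_grad():
        init_loss = sum(loss_fn(model, b).item() for b in data_loader) / len(data_loader)

    losses = []
    data_iter = iter(data_loader)
    for step in range(n_burnin + n_samples):
        try:
            batch = next(data_iter)
        except StopIteration:
            data_iter = iter(data_loader)
            batch = next(data_iter)

        model.zero_grad()
        loss = loss_fn(model, batch)
        loss.backward()

        with torch.no_grad():
            for p, p0 in zip(model.parameters(), w_star):
                if p.grad is None:
                    continue
                # Tempered-posterior gradient plus a localization term
                # pulling the chain back toward w*.
                drift = n_data * beta * p.grad + localization * (p - p0)
                noise = torch.randn_like(p) * math.sqrt(step_size)
                p.add_(-0.5 * step_size * drift + noise)

        if step >= n_burnin:
            losses.append(loss.item())

    mean_loss = sum(losses) / len(losses)
    return n_data * beta * (mean_loss - init_loss)
```

Under this reading, the "opposing staircases" observation corresponds to running such an estimator over SGD checkpoints: at a phase transition the regret curve steps down while the estimated LLC steps up, indicating a more complex but more accurate policy.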

Page Count
50 pages

Category
Computer Science:
Machine Learning (CS)