Stagewise Reinforcement Learning and the Geometry of the Regret Landscape
By: Chris Elliott, Einar Urdshals, David Quarel, and more
Potential Business Impact:
Detects when an AI switches to a new strategy during training, helping engineers monitor and interpret learning.
Singular learning theory characterizes Bayesian learning as an evolving tradeoff between accuracy and complexity, with transitions between qualitatively different solutions as sample size increases. We extend this theory to deep reinforcement learning, proving that the concentration of the generalized posterior over policies is governed by the local learning coefficient (LLC), an invariant of the geometry of the regret function. This theory predicts that Bayesian phase transitions in reinforcement learning should proceed from simple policies with high regret to complex policies with low regret. We verify this prediction empirically in a gridworld environment exhibiting stagewise policy development: phase transitions over SGD training manifest as "opposing staircases" where regret decreases sharply while the LLC increases. Notably, the LLC detects phase transitions even when estimated on a subset of states where the policies appear identical in terms of regret, suggesting it captures changes in the underlying algorithm rather than just performance.
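For context, the concentration claim has a standard form in singular learning theory (the paper's precise statement for the regret landscape may differ): the Bayes free energy at sample size $n$ expands as

$$F_n = n L_n(w^*) + \lambda(w^*) \log n + O(\log \log n),$$

where $L_n$ is the empirical loss, $w^*$ a local minimum, and $\lambda(w^*)$ the local learning coefficient. Because a lower LLC means a geometrically simpler (more degenerate) solution, the accuracy term $n L_n(w^*)$ eventually outweighs the complexity term $\lambda(w^*) \log n$ as $n$ grows, which is what drives the predicted transitions from simple high-regret policies to complex low-regret ones. In the RL setting of the abstract, one would expect $L_n$ to be replaced by an empirical regret $R_n$ and the generalized posterior to take a Gibbs form $p_n(w) \propto \exp(-n \beta R_n(w))\, \varphi(w)$, though this is an inference from the abstract rather than a quoted definition.

The abstract also does not say how the LLC is estimated. A widely used estimator from the singular learning theory literature samples a tempered posterior localized at $w^*$ with stochastic-gradient Langevin dynamics (SGLD) and computes $\hat{\lambda} = n\beta\,(\mathbb{E}_\beta[L_n(w)] - L_n(w^*))$. The Python sketch below illustrates that estimator under stated assumptions; it is not the paper's code, loss_fn would stand in for an empirical regret over a batch of states, and every hyperparameter value is illustrative.

import numpy as np

def estimate_llc(loss_fn, grad_fn, w_star, n,
                 beta=None, eps=1e-5, gamma=100.0,
                 num_steps=2000, burn_in=500, seed=0):
    # SGLD-based LLC estimator:
    #   llc_hat = n * beta * (mean of L_n(w) over SGLD samples - L_n(w_star)),
    # where the samples come from a tempered posterior localized at w_star.
    rng = np.random.default_rng(seed)
    if beta is None:
        beta = 1.0 / np.log(n)  # a common inverse-temperature choice
    w = w_star.copy()
    losses = []
    for t in range(num_steps):
        # Gradient drift toward low loss, a quadratic pull back to w_star,
        # plus injected Gaussian noise (the Langevin term).
        drift = -(eps / 2.0) * (n * beta * grad_fn(w) + gamma * (w - w_star))
        w = w + drift + np.sqrt(eps) * rng.standard_normal(w.shape)
        if t >= burn_in:
            losses.append(loss_fn(w))
    return n * beta * (float(np.mean(losses)) - loss_fn(w_star))

# Sanity check on a toy quadratic loss, where the true LLC is d/2:
d = 4
lam_hat = estimate_llc(lambda w: 0.5 * np.sum(w**2), lambda w: w,
                       np.zeros(d), n=10_000)
print(lam_hat)  # should land near 2, biased slightly low by the localization term

In the paper's gridworld experiment, running such an estimator on a restricted subset of states is what allows the LLC to separate policies whose regret on that subset is identical.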
Similar Papers
Asymptotically optimal reinforcement learning in Block Markov Decision Processes
Machine Learning (CS)
Teaches robots to learn faster in complex worlds.
Eventually LIL Regret: Almost Sure $\ln\ln T$ Regret for a sub-Gaussian Mixture on Unbounded Data
Machine Learning (CS)
Makes betting smarter, even with wild data.
Tail Distribution of Regret in Optimistic Reinforcement Learning
Machine Learning (CS)
Helps robots learn faster and make fewer mistakes.