Proximal Regret and Proximal Correlated Equilibria: A New Tractable Solution Concept for Online Learning and Games
By: Yang Cai, Constantinos Daskalakis, Haipeng Luo, and more
Potential Business Impact:
Helps game-playing AI learn faster and converge to fairer outcomes.
Learning and computation of equilibria are central problems in algorithmic game theory. In this work, we introduce proximal regret, a new notion of regret based on proximal operators that lies strictly between external and swap regret. When every player employs a no-proximal-regret algorithm in a general convex game, the empirical distribution of play converges to proximal correlated equilibria (PCE), a refinement of coarse correlated equilibria. Our framework unifies several emerging notions in online learning and game theory -- such as gradient equilibrium and semicoarse correlated equilibrium -- and introduces new ones. Our main result shows that the classic Online Gradient Descent (GD) algorithm achieves an optimal $O(\sqrt{T})$ bound on proximal regret, revealing that GD, without modification, minimizes a stronger regret notion than external regret. This provides a new explanation for the empirically superior performance of gradient descent in online learning and games. We further extend our analysis to Mirror Descent in the Bregman setting and to Optimistic Gradient Descent, which yields faster convergence in smooth convex games.
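The abstract's central claim concerns the classic projected Online Gradient Descent algorithm, used without modification. As a minimal sketch (not the paper's own code), the following shows standard projected OGD with the usual eta_t = 1/sqrt(t) step size on a sequence of gradient oracles; the projection radius and helper names are illustrative assumptions.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius (illustrative
    choice of convex feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def online_gradient_descent(grad_fns, x0, radius=1.0):
    """Vanilla projected Online Gradient Descent.

    grad_fns: one gradient oracle per round t = 1..T.
    Uses the standard eta_t = 1/sqrt(t) step size, which yields the
    classical O(sqrt(T)) external-regret bound; the paper's result is
    that this same unmodified update also attains O(sqrt(T)) proximal
    regret, a strictly stronger guarantee.
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t, grad in enumerate(grad_fns, start=1):
        eta = 1.0 / np.sqrt(t)
        x = project_ball(x - eta * grad(x), radius)
        iterates.append(x.copy())
    return iterates

# Usage: linear losses f_t(x) = <g, x> with a fixed gradient g = (1, 0).
# The iterates move to the boundary point (-1, 0) of the unit ball.
g = np.array([1.0, 0.0])
iterates = online_gradient_descent([lambda x: g] * 50, x0=[0.0, 0.0])
```

With a fixed linear loss, the very first step already reaches the minimizer on the ball's boundary, and the projection keeps every later iterate there.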
Similar Papers
Actively Learning to Coordinate in Convex Games via Approximate Correlated Equilibrium
CS and Game Theory
Helps a traffic manager guide cars better.
Scale-Invariant Regret Matching and Online Learning with Optimal Convergence: Bridging Theory and Practice in Zero-Sum Games
CS and Game Theory
Makes game AI learn faster and better.