Proximal Regret and Proximal Correlated Equilibria: A New Tractable Solution Concept for Online Learning and Games
By: Yang Cai, Constantinos Daskalakis, Haipeng Luo, and more
Potential Business Impact:
Helps computers learn faster in games.
Learning and computation of equilibria are central problems in game theory, the theory of computation, and artificial intelligence. In this work, we introduce proximal regret, a new notion of regret based on proximal operators that lies strictly between external and swap regret. When every player employs a no-proximal-regret algorithm in a general convex game, the empirical distribution of play converges to proximal correlated equilibria (PCE), a refinement of coarse correlated equilibria. Our framework unifies several emerging notions in online learning and game theory, such as gradient equilibrium and semicoarse correlated equilibrium, and introduces new ones. Our main result shows that the classic Online Gradient Descent (GD) algorithm achieves an optimal $O(\sqrt{T})$ bound on proximal regret, revealing that GD, without modification, minimizes a stronger regret notion than external regret. This provides a new explanation for the empirically superior performance of gradient descent in online learning and games. We further extend our analysis to Mirror Descent in the Bregman setting and to Optimistic Gradient Descent, which yields faster convergence in smooth convex games.
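The abstract's main result concerns the classic Online Gradient Descent algorithm. As a point of reference, here is a minimal sketch of standard projected OGD with the usual $\eta_t = 1/\sqrt{t}$ step size; the constraint set (a Euclidean ball), the helper names, and the loss model are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball of the given radius
    # (an illustrative choice of convex constraint set).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(gradient_fn, dim, T, radius=1.0):
    # Standard projected OGD: at each round t, step against the
    # gradient of that round's loss with step size 1/sqrt(t),
    # then project back onto the constraint set.
    x = np.zeros(dim)
    iterates = []
    for t in range(1, T + 1):
        iterates.append(x.copy())
        g = gradient_fn(x, t)        # gradient of the round-t loss at x
        eta = 1.0 / np.sqrt(t)       # step size giving O(sqrt(T)) regret
        x = project_ball(x - eta * g, radius)
    return iterates

# Hypothetical usage: a fixed quadratic loss f(x) = ||x - c||^2,
# whose gradient at x is 2 * (x - c).
c = np.array([0.5, 0.3])
iterates = online_gradient_descent(lambda x, t: 2.0 * (x - c), dim=2, T=50)
```

On a fixed convex loss like this, the iterates settle near the minimizer `c`; the paper's contribution is that this unmodified algorithm already controls the stronger proximal-regret notion, not just external regret.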
Similar Papers
Actively Learning to Coordinate in Convex Games via Approximate Correlated Equilibrium
CS and Game Theory
Helps a traffic manager guide cars better.
Approximating Nash Equilibria in General-Sum Games via Meta-Learning
CS and Game Theory
Helps games find fair winning strategies faster.