A Parallelizable Approach for Characterizing NE in Zero-Sum Games After a Linear Number of Iterations of Gradient Descent
By: Taemin Kim, James P. Bailey
Potential Business Impact:
Finds stable strategies in competitive games faster than before.
We study online optimization methods for zero-sum games, a fundamental problem in adversarial learning with applications in machine learning, economics, and many other domains. Traditional methods approximate Nash equilibria (NE) using either regret-based methods (time-average convergence) or contraction-map-based methods (last-iterate convergence). We propose a new method based on Hamiltonian dynamics in physics and prove that it can characterize the set of NE in a finite (linear) number of iterations of alternating gradient descent in the unbounded setting, modulo degeneracy, a first in online optimization. Unlike standard methods for computing NE, our proposed approach can be parallelized and works with arbitrary learning rates, both firsts in algorithmic game theory. Experimentally, we support our results by showing that our approach drastically outperforms standard methods.
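To make the setting concrete, below is a minimal sketch of alternating gradient descent/ascent on an unconstrained bilinear zero-sum game. This is an illustration of the baseline dynamics discussed above, not the paper's Hamiltonian-based method; the payoff matrix A, learning rate eta, and iteration count are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' algorithm): alternating
# gradient descent/ascent on an unconstrained bilinear zero-sum game
# f(x, y) = x^T A y, where x minimizes and y maximizes.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # assumed payoff matrix
x = rng.standard_normal(3)        # minimizer's strategy (unbounded)
y = rng.standard_normal(3)        # maximizer's strategy (unbounded)
eta = 0.1                         # assumed learning rate

for t in range(100):
    # Alternating updates: x moves first, then y responds to the new x.
    x = x - eta * (A @ y)         # gradient descent step for the minimizer
    y = y + eta * (A.T @ x)       # gradient ascent step for the maximizer

print("final value x^T A y:", x @ A @ y)
```

In alternating play, the second player's update uses the first player's freshly updated iterate, which is the key difference from simultaneous gradient descent/ascent.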
Similar Papers
Approximating Nash Equilibria in General-Sum Games via Meta-Learning
CS and Game Theory
Helps players in games find fair winning strategies faster.
Convergence of Time-Averaged Mean Field Gradient Descent Dynamics for Continuous Multi-Player Zero-Sum Games
Optimization and Control
Helps many players find fair game strategies.
Solving Neural Min-Max Games: The Role of Architecture, Initialization & Dynamics
Machine Learning (CS)
Helps AI players in games find fair outcomes for everyone.