Solving Neural Min-Max Games: The Role of Architecture, Initialization & Dynamics
By: Deep Patel, Emmanouil-Vasileios Vlatakis-Gkaragkounis
Potential Business Impact:
Makes AI games find fair wins for everyone.
Many emerging applications - such as adversarial training, AI alignment, and robust optimization - can be framed as zero-sum games between neural nets, with von Neumann-Nash equilibria (NE) capturing the desirable system behavior. While such games often involve non-convex, non-concave objectives, empirical evidence shows that simple gradient methods frequently converge, suggesting a hidden geometric structure. In this paper, we provide a theoretical framework that explains this phenomenon through the lens of hidden convexity and overparameterization. We identify sufficient conditions - spanning initialization, training dynamics, and network width - that guarantee global convergence to an NE in a broad class of non-convex min-max games. To our knowledge, this is the first such result for games that involve two-layer neural networks. Technically, our approach is twofold: (a) we derive a novel path-length bound for the alternating gradient descent-ascent scheme in min-max games; and (b) we show that the reduction from a hidden convex-concave geometry to a two-sided Polyak-Łojasiewicz (PŁ) min-max condition holds with high probability under overparameterization, using tools from random matrix theory.
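The abstract centers on the alternating gradient descent-ascent (GDA) scheme, in which the max player's ascent step uses the min player's freshly updated iterate rather than the one from the previous round. The sketch below is purely illustrative and not the paper's code: it runs alternating GDA on a simple convex-concave quadratic instead of the two-layer network games studied in the paper, and the objective, step size eta, and step count are assumptions chosen for readability.

```python
# Minimal sketch of alternating gradient descent-ascent on a toy
# convex-concave objective (illustrative only; not the authors' setup).
import numpy as np

def grad_x(x, y):
    # Gradient of f(x, y) = 0.5||x||^2 + x.y - 0.5||y||^2 with respect to x.
    return x + y

def grad_y(x, y):
    # Gradient of the same f with respect to y.
    return x - y

def alternating_gda(x0, y0, eta=0.1, steps=500):
    x, y = x0.copy(), y0.copy()
    for _ in range(steps):
        x = x - eta * grad_x(x, y)   # descent step for the min player
        y = y + eta * grad_y(x, y)   # ascent step uses the *updated* x
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = alternating_gda(rng.normal(size=5), rng.normal(size=5))
    # The unique Nash equilibrium of this toy game is (0, 0).
    print("||x||, ||y|| near equilibrium:", np.linalg.norm(x), np.linalg.norm(y))
```

For this toy objective the iterates contract toward the equilibrium at the origin for a small enough step size; the paper's contribution is to establish convergence guarantees of this flavor in the far harder non-convex, non-concave neural setting.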
Similar Papers
Solving Zero-Sum Convex Markov Games
CS and Game Theory
Helps AI learn to win games fairly.
A Parallelizable Approach for Characterizing NE in Zero-Sum Games After a Linear Number of Iterations of Gradient Descent
CS and Game Theory
Solves tough math puzzles faster than before.
A Convexity-dependent Two-Phase Training Algorithm for Deep Neural Networks
Machine Learning (CS)
Makes computer learning faster and more accurate.