Solving Neural Min-Max Games: The Role of Architecture, Initialization & Dynamics

Published: November 29, 2025 | arXiv ID: 2512.00389v1

By: Deep Patel, Emmanouil-Vasileios Vlatakis-Gkaragkounis

Potential Business Impact:

Explains when and why adversarial training between neural networks reliably converges to equilibrium, guiding choices of architecture, initialization, and training dynamics for robust AI systems.

Business Areas:
A/B Testing, Data and Analytics

Many emerging applications - such as adversarial training, AI alignment, and robust optimization - can be framed as zero-sum games between neural nets, with von Neumann-Nash equilibria (NE) capturing the desirable system behavior. While such games often involve non-convex non-concave objectives, empirical evidence shows that simple gradient methods frequently converge, suggesting a hidden geometric structure. In this paper, we provide a theoretical framework that explains this phenomenon through the lens of hidden convexity and overparameterization. We identify sufficient conditions - spanning initialization, training dynamics, and network width - that guarantee global convergence to a NE in a broad class of non-convex min-max games. To our knowledge, this is the first such result for games that involve two-layer neural networks. Technically, our approach is twofold: (a) we derive a novel path-length bound for the alternating gradient descent-ascent scheme in min-max games; and (b) we show that the reduction from a hidden convex-concave geometry to a two-sided Polyak-Łojasiewicz (PŁ) min-max condition holds with high probability under overparameterization, using tools from random matrix theory.
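For context, a standard formulation of the two-sided PŁ condition the abstract refers to (with PŁ constants μ₁, μ₂ > 0, stated here for illustration rather than taken from the paper) is:

```latex
\|\nabla_x f(x,y)\|^2 \ge 2\mu_1 \big( f(x,y) - \min_{x'} f(x',y) \big),
\qquad
\|\nabla_y f(x,y)\|^2 \ge 2\mu_2 \big( \max_{y'} f(x,y') - f(x,y) \big).
```

The sketch below illustrates the alternating gradient descent-ascent scheme on a toy zero-sum game between two small two-layer networks. The payoff function, widths, and step sizes are illustrative assumptions, not the paper's exact construction; the objective is convex-concave in the network outputs but non-convex in the weights, mimicking the hidden convex-concave geometry the abstract describes.

```python
# Minimal sketch of alternating gradient descent-ascent (GDA) for a toy
# zero-sum game between two two-layer networks. All hyperparameters here
# (width, step size, payoff) are assumptions for illustration only.
import torch

torch.manual_seed(0)

d, width, n = 4, 64, 32          # input dim, hidden width, batch size (assumed)
X = torch.randn(n, d)            # fixed inputs defining the game

def two_layer(d_in, m):
    # Two-layer net; "overparameterization" corresponds to taking m large.
    return torch.nn.Sequential(
        torch.nn.Linear(d_in, m), torch.nn.Tanh(), torch.nn.Linear(m, 1)
    )

f_min = two_layer(d, width)      # minimizing player
f_max = two_layer(d, width)      # maximizing player

def payoff():
    # Toy objective: convex in u = f_min(X), concave in v = f_max(X),
    # but non-convex non-concave in the networks' weights.
    u, v = f_min(X), f_max(X)
    return (u * v).mean() + 0.5 * (u ** 2).mean() - 0.5 * (v ** 2).mean()

eta = 1e-2                       # step size (assumed)
opt_min = torch.optim.SGD(f_min.parameters(), lr=eta)
opt_max = torch.optim.SGD(f_max.parameters(), lr=eta)

for t in range(2000):
    # Alternating scheme: descent step on the min player first...
    opt_min.zero_grad()
    payoff().backward()
    opt_min.step()

    # ...then an ascent step on the max player against the updated min player.
    opt_max.zero_grad()
    (-payoff()).backward()       # negate so SGD performs gradient *ascent*
    opt_max.step()

print(f"final payoff: {payoff().item():.4f}")
```

In this toy game the unique saddle point in output space is u = v = 0, so with a small step size the alternating updates drive the payoff toward zero; the paper's analysis concerns when such convergence is guaranteed globally for the actual network weights.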

Country of Origin
🇺🇸 United States

Page Count
56 pages

Category
Computer Science:
Machine Learning (CS)