Finite-Agent Stochastic Differential Games on Large Graphs: II. Graph-Based Architectures
By: Ruimeng Hu, Jihao Long, Haosheng Zhou
Potential Business Impact:
Helps computers solve complex games faster.
We propose a novel neural network architecture, called Non-Trainable Modification (NTM), for computing Nash equilibria in stochastic differential games (SDGs) on graphs. These games model a broad class of graph-structured multi-agent systems arising in finance, robotics, energy, and social dynamics, where agents interact locally under uncertainty. The NTM architecture imposes a graph-guided sparsification on feedforward neural networks, embedding fixed, non-trainable components aligned with the underlying graph topology. This design enhances interpretability and stability, while significantly reducing the number of trainable parameters in large-scale, sparse settings. We theoretically establish a universal approximation property for NTM in static games on graphs and numerically validate its expressivity and robustness through supervised learning tasks. Building on this foundation, we incorporate NTM into two state-of-the-art game solvers, Direct Parameterization and Deep BSDE, yielding their sparse variants (NTM-DP and NTM-DBSDE). Numerical experiments on three SDGs across various graph structures demonstrate that NTM-based methods achieve performance comparable to their fully trainable counterparts, while offering improved computational efficiency.
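The abstract's core idea, a feedforward layer whose weights are sparsified by a fixed, non-trainable mask derived from the graph topology, can be illustrated with a minimal sketch. This is an assumption-laden toy (the function name `graph_masked_layer`, the ReLU choice, and the self-loop convention are illustrative, not the paper's actual NTM specification): each agent's output depends only on its own state and its graph neighbors' states.

```python
import numpy as np

def graph_masked_layer(W, b, x, adj):
    """Feedforward layer with graph-guided sparsification: weight entries
    W[i, j] are zeroed unless agents i and j are neighbors in the graph
    (self-connections are kept). The mask is fixed and non-trainable;
    only the surviving entries of W and the bias b would be trained."""
    mask = adj + np.eye(adj.shape[0])       # adjacency plus self-loops (illustrative convention)
    W_sparse = W * (mask > 0)               # fixed, non-trainable sparsification
    return np.maximum(W_sparse @ x + b, 0)  # ReLU activation (illustrative choice)

# Toy example: 4 agents on a ring graph 0-1-2-3-0. Agent 0's output
# depends only on agents 0, 1, and 3, never on agent 2.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))   # dense trainable weights before masking
b = np.zeros(4)
x = rng.standard_normal(4)        # one scalar state per agent
y = graph_masked_layer(W, b, x, adj)
```

On a sparse graph the mask zeroes most of `W`, which is the source of the parameter savings the abstract describes: the number of effective weights scales with the number of edges rather than with the square of the number of agents.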