Towards agent-based-model informed neural networks
By: Nino Antulov-Fantulin
In this article, we present a framework for designing neural networks that remain consistent with the underlying principles of agent-based models. We begin by highlighting the limitations of standard neural differential equations in modeling complex systems, where physical invariants (like energy) are often absent but other constraints (like mass conservation, network locality, and bounded rationality) must be enforced. To address this, we introduce Agent-Based-Model informed Neural Networks (ABM-NNs), which leverage restricted graph neural networks and hierarchical decomposition to learn interpretable, structure-preserving dynamics. We validate the framework across three case studies of increasing complexity: (i) a generalized Lotka--Volterra system, where we recover ground-truth parameters from short trajectories in the presence of interventions; (ii) a graph-based SIR contagion model, where our method outperforms state-of-the-art graph learning baselines (GCN, GraphSAGE, Graph Transformer) in out-of-sample forecasting and noise robustness; and (iii) a real-world macroeconomic model of the ten largest economies, where we learn coupled GDP dynamics from empirical data and demonstrate gradient-based counterfactual analysis for policy interventions.
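As background for the first case study, the generalized Lotka--Volterra (GLV) dynamics take the form dx_i/dt = x_i (r_i + Σ_j A_ij x_j), with growth rates r and interaction matrix A. The sketch below simulates this system with a simple forward-Euler integrator; the two-species parameter values are illustrative placeholders, not values from the paper, and the paper's actual parameter-recovery procedure is not shown here.

```python
import numpy as np

def glv_step(x, r, A, dt=0.01):
    """One forward-Euler step of the generalized Lotka-Volterra ODE:
    dx_i/dt = x_i * (r_i + sum_j A_ij * x_j)."""
    return x + dt * x * (r + A @ x)

# Illustrative predator-prey parameters (assumed, not from the paper).
r = np.array([1.0, -0.5])           # intrinsic growth (prey) / decay (predator)
A = np.array([[0.0,   -0.1],
              [0.075,  0.0]])       # pairwise interaction matrix
x = np.array([10.0, 5.0])           # initial populations

# Roll out a short trajectory, the kind of data a learned model
# would be fit against.
traj = [x.copy()]
for _ in range(1000):
    x = glv_step(x, r, A)
    traj.append(x.copy())
traj = np.asarray(traj)             # shape: (1001, 2)
```

Trajectories like `traj` (possibly with interventions, i.e. externally imposed changes to `r` or `A` mid-rollout) are the kind of short time series from which the paper reports recovering the ground-truth parameters.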