Information is localized in growing network models
By: Till Hoffmann, Jukka-Pekka Onnela
Mechanistic network models can capture salient characteristics of empirical networks using a small set of domain-specific, interpretable mechanisms. Yet inference remains challenging because the likelihood is often intractable. We show that, for a broad class of growing network models, information about model parameters is localized in the network, i.e., the likelihood can be expressed in terms of small subgraphs. We take a Bayesian perspective on inference and develop neural density estimators (NDEs) to approximate the posterior distribution of model parameters using graph neural networks (GNNs) with limited receptive field, i.e., the GNN can only "see" small subgraphs. We characterize nine growing network models in terms of their localization and demonstrate that localization predictions agree with NDEs on simulated data. Even for non-localized models, NDEs can infer high-fidelity posteriors matching model-specific inference methods at a fraction of the cost. Our findings establish information localization as a fundamental property of network growth, theoretically justifying the analysis of local subgraphs embedded in larger, unobserved networks and the use of GNNs with limited receptive field for likelihood-free inference.
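To make the notion of a growing network model and a "local" statistic concrete, here is a minimal sketch, not taken from the paper: a preferential-attachment growth process (Barabási–Albert style) whose single parameter `m` (edges per new node) leaves a clear signature in a purely local quantity, the degree distribution, which is computable from one-hop neighborhoods alone. All function names are illustrative.

```python
import random
from collections import Counter

def grow_preferential_attachment(n, m, seed=0):
    """Grow a network by preferential attachment.

    Each new node attaches to m distinct existing nodes chosen with
    probability proportional to their current degree. Illustrative
    sketch only; the paper studies nine such growth mechanisms.
    """
    rng = random.Random(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    for new in range(m + 1, n):
        nodes = list(degree)
        weights = [degree[u] for u in nodes]
        targets = set()
        while len(targets) < m:
            targets.add(rng.choices(nodes, weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges, degree

edges, degree = grow_preferential_attachment(200, 2)
# A local summary statistic: the degree histogram. Information about m
# is "localized" in the sense that such small-subgraph statistics,
# rather than the full adjacency structure, carry the likelihood signal.
hist = Counter(degree.values())
print(sorted(hist.items())[:5])
```

In a likelihood-free setting, statistics of this kind (or GNN embeddings with a correspondingly small receptive field) would be fed to a neural density estimator to approximate the posterior over the growth parameters.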