PAC-Bayesian Generalization Bounds for Graph Convolutional Networks on Inductive Node Classification
By: Huayi Tang, Yong Liu
Potential Business Impact:
Helps computers learn from changing online connections.
Graph neural networks (GNNs) have achieved remarkable success in processing graph-structured data across various applications. A critical aspect of real-world graphs is their dynamic nature, where new nodes are continually added and existing connections may change over time. Previous theoretical studies, largely based on the transductive learning framework, fail to adequately model such temporal evolution and structural dynamics. In this paper, we present a PAC-Bayesian theoretical analysis of graph convolutional networks (GCNs) for inductive node classification, treating nodes as dependent and non-identically distributed data points. We derive novel generalization bounds for one-layer GCNs that explicitly incorporate the effects of data dependency and non-stationarity, and establish sufficient conditions under which the generalization gap converges to zero as the number of nodes increases. Furthermore, we extend our analysis to two-layer GCNs and show that stronger assumptions on graph topology are required to guarantee convergence. This work establishes a theoretical foundation for understanding and improving GNN generalization in dynamic graph environments.
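For context, a minimal sketch of the setting, using standard notation not drawn from the paper itself. A one-layer GCN in the usual Kipf-Welling form computes

f(X, A) = \sigma(\hat{A} X W), \qquad \hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2},

where \tilde{A} = A + I is the adjacency matrix with self-loops and \tilde{D} its degree matrix. Classical PAC-Bayesian analysis under i.i.d. sampling gives, with probability at least 1 - \delta over a sample of size n, for any fixed prior P and all posteriors Q over the weights,

\mathbb{E}_{h \sim Q}[L(h)] \;\le\; \mathbb{E}_{h \sim Q}[\hat{L}_n(h)] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n}}.

The bounds derived in the paper replace the i.i.d. assumption behind this rate with explicit dependency and non-stationarity terms for inductively observed nodes; the exact form of those terms appears in the paper itself, not in this sketch.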
Similar Papers
Statistical physics analysis of graph neural networks: Approaching optimality in the contextual stochastic block model
Disordered Systems and Neural Networks
Makes computers understand complex connections better.
PAC-Bayesian risk bounds for fully connected deep neural network with Gaussian priors
Statistics Theory
Makes smart computer programs learn faster and better.
PAC-Bayesian Reinforcement Learning Trains Generalizable Policies
Machine Learning (CS)
Helps robots learn faster and safer.