Simplifying Graph Convolutional Networks with Redundancy-Free Neighbors
By: Jielong Lu, Zhihao Wu, Zhiling Cai, and more
Potential Business Impact:
Helps machine-learning systems understand complex connections in graph-structured data.
In recent years, Graph Convolutional Networks (GCNs) have gained popularity for their exceptional ability to process graph-structured data. Existing GCN-based approaches typically employ a shallow model architecture due to the over-smoothing phenomenon. Current approaches to mitigating over-smoothing primarily involve adding supplementary components to GCN architectures, such as residual connections and random edge-dropping strategies. However, these improvements toward deep GCNs have achieved only limited success. In this work, we analyze the intrinsic message passing mechanism of GCNs and identify a critical issue: messages originating from high-order neighbors must traverse through low-order neighbors to reach the target node. This repeated reliance on low-order neighbors leads to redundant information aggregation, a phenomenon we term over-aggregation. Our analysis demonstrates that over-aggregation not only introduces significant redundancy but also serves as the fundamental cause of over-smoothing in GCNs.
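The over-aggregation idea described above can be illustrated with a small numerical sketch (not the paper's method; the toy graph, feature dimensions, and depth are illustrative assumptions): on a path graph, every message from a high-order neighbor must pass through the same low-order neighbors at each hop, and repeated GCN-style propagation drives node features toward similarity.

```python
import numpy as np

def normalized_adjacency(A):
    # Standard GCN propagation matrix with self-loops: D^{-1/2}(A+I)D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Toy 5-node path graph: messages from distant (high-order) neighbors must
# traverse the same low-order neighbors at every hop.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

S = normalized_adjacency(A)
X = np.random.default_rng(0).normal(size=(5, 3))  # random node features

def spread(H):
    # Average per-feature standard deviation across nodes:
    # a rough measure of how distinguishable the node representations are.
    return float(H.std(axis=0).mean())

H = X.copy()
for _ in range(32):  # 32 propagation steps, as in a deep linear GCN
    H = S @ H        # each hop re-aggregates the same low-order neighbors

print(spread(X), spread(H))  # feature diversity shrinks with depth
```

After many hops the representations collapse toward a dominant component, so `spread(H)` is much smaller than `spread(X)` — the over-smoothing symptom the abstract attributes to redundant low-order aggregation.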
Similar Papers
Statistical physics analysis of graph neural networks: Approaching optimality in the contextual stochastic block model
Disordered Systems and Neural Networks
Helps computers understand complex connections more accurately.
The Oversmoothing Fallacy: A Misguided Narrative in GNN Research
Machine Learning (CS)
Helps deep neural networks learn better and train faster.
ScaleGNN: Towards Scalable Graph Neural Networks via Adaptive High-order Neighboring Feature Fusion
Machine Learning (CS)
Makes machine learning on large networks faster and more accurate.