Solving Oversmoothing in GNNs via Nonlocal Message Passing: Algebraic Smoothing and Depth Scalability
By: Weiqi Guan, Junlin He
Potential Business Impact:
Makes computer brains learn deeper and better.
The relationship between Layer Normalization (LN) placement and the oversmoothing phenomenon remains underexplored. We identify a critical dilemma: Pre-LN architectures avoid oversmoothing but suffer from the curse of depth, while Post-LN architectures bypass the curse of depth but experience oversmoothing. To resolve this, we propose a new method based on Post-LN that induces algebraic smoothing, preventing oversmoothing without incurring the curse of depth. Empirical results across five benchmarks demonstrate that our approach supports deeper networks (up to 256 layers) and improves performance while requiring no additional parameters.
Key contributions:
Theoretical Characterization: Analysis of LN dynamics and their impact on oversmoothing and the curse of depth.
A Principled Solution: A parameter-efficient method that induces algebraic smoothing and avoids both oversmoothing and the curse of depth.
Empirical Validation: Extensive experiments showing the effectiveness of the method in deeper GNNs.
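The Pre-LN versus Post-LN distinction in the abstract comes down to where LayerNorm sits relative to the neighborhood aggregation and the residual connection. Below is a minimal sketch contrasting the two placements; the `propagate`, `PreLNLayer`, and `PostLNLayer` names and the toy mean aggregation are illustrative assumptions, and neither class is the authors' proposed algebraic-smoothing method.

```python
# Minimal sketch (not the paper's method) of Pre-LN vs. Post-LN placement
# in a residual GNN layer. All names here are illustrative assumptions.
import torch
import torch.nn as nn


def propagate(x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Toy message passing: row-normalized averaging over neighbors."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return (adj @ x) / deg


class PreLNLayer(nn.Module):
    """Pre-LN: normalize the input, aggregate, then add the residual."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return x + self.lin(propagate(self.norm(x), adj))


class PostLNLayer(nn.Module):
    """Post-LN: aggregate, add the residual, then normalize the sum."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.norm(x + self.lin(propagate(x, adj)))
```

Stacking many such layers is one way to probe the dilemma the abstract describes: whether node representations collapse toward one another (oversmoothing) or whether trainability degrades as depth grows (the curse of depth).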
Similar Papers
The Oversmoothing Fallacy: A Misguided Narrative in GNN Research
Machine Learning (CS)
Makes computer networks learn better, deeper, and faster.
Simplifying Graph Convolutional Networks with Redundancy-Free Neighbors
Machine Learning (CS)
Fixes computer learning to understand complex connections.