Homomorphism Expressivity of Spectral Invariant Graph Neural Networks
By: Jingchu Gai, Yiheng Du, Bohang Zhang, et al.
Graph spectra are an important class of structural features on graphs that have shown promising results in enhancing Graph Neural Networks (GNNs). Despite their widespread practical use, the theoretical understanding of the power of spectral invariants -- particularly their contribution to GNNs -- remains incomplete. In this paper, we address this fundamental question through the lens of homomorphism expressivity, providing a comprehensive and quantitative analysis of the expressive power of spectral invariants. Specifically, we prove that spectral invariant GNNs can exactly count homomorphisms from a specific class of tree-like graphs, which we refer to as parallel trees. We highlight the significance of this result in several contexts: establishing a quantitative expressiveness hierarchy across different architectural variants, offering insights into the impact of GNN depth, and characterizing the subgraph counting capabilities of spectral invariant GNNs. In particular, our results significantly extend Arvind et al. (2024) and settle their open questions. Finally, we generalize our analysis to higher-order GNNs and answer an open question raised by Zhang et al. (2024).
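To make the central notion concrete: a homomorphism from a pattern graph F to a graph G is a vertex map that sends every edge of F to an edge of G, and "homomorphism expressivity" asks which pattern counts hom(F, G) an architecture can determine. The brute-force sketch below (not from the paper; the function name and edge-list representation are illustrative choices) counts homomorphisms between two small undirected graphs by enumerating all vertex maps.

```python
from itertools import product

def hom_count(pattern_edges, pattern_n, target_edges, target_n):
    """Brute-force count of homomorphisms from a pattern graph into a
    target graph, both given as undirected edge lists on vertices 0..n-1."""
    # Store ordered pairs so undirected edge membership is an O(1) check.
    adj = set()
    for u, v in target_edges:
        adj.add((u, v))
        adj.add((v, u))
    count = 0
    # Enumerate every map f: V(pattern) -> V(target) and keep the
    # edge-preserving ones.
    for f in product(range(target_n), repeat=pattern_n):
        if all((f[u], f[v]) in adj for u, v in pattern_edges):
            count += 1
    return count

triangle = [(0, 1), (1, 2), (0, 2)]
# A single edge maps into the triangle 6 ways (3 edges x 2 orientations).
print(hom_count([(0, 1)], 2, triangle, 3))            # 6
# A path on 3 vertices maps into the triangle 12 ways.
print(hom_count([(0, 1), (1, 2)], 3, triangle, 3))    # 12
```

This exponential enumeration is only feasible for tiny patterns; the paper's point is structural, identifying which pattern families (parallel trees) spectral invariant GNNs can count, not how to compute the counts efficiently.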