Topologically-Stabilized Graph Neural Networks: Empirical Robustness Across Domains
By: Jelena Losic
Graph Neural Networks (GNNs) have become the standard for graph representation learning but remain vulnerable to structural perturbations. We propose a novel framework that integrates persistent homology features with stability regularization to enhance robustness. Building on the stability theorems of persistent homology \cite{cohen2007stability}, our method combines GIN architectures with multi-scale topological features extracted from persistence images, enforced by Hiraoka-Kusano-inspired stability constraints. Across six diverse datasets spanning biochemical, social, and collaboration networks, our approach demonstrates strong robustness to edge perturbations while maintaining competitive accuracy. Notably, we observe minimal performance degradation (0-4\% on most datasets) under perturbation, significantly outperforming the stability of baseline GNNs. Our work provides a theoretically grounded and empirically validated approach to robust graph learning that aligns with recent advances in topological regularization.
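A minimal sketch of the kind of architecture the abstract describes, not the authors' released code: GIN graph embeddings are concatenated with precomputed persistence-image vectors, and a stability regularizer penalizes embedding drift under edge perturbation. Names such as `TopoGIN`, `pi_features`, and `lambda_stab` are illustrative assumptions.

```python
# Hedged sketch: fuse GIN embeddings with persistence-image features and add a
# stability regularizer. Assumes PyTorch Geometric and batched Data objects.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GINConv, global_add_pool


class TopoGIN(nn.Module):
    def __init__(self, in_dim, hid_dim, pi_dim, num_classes):
        super().__init__()
        self.conv1 = GINConv(nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                           nn.Linear(hid_dim, hid_dim)))
        self.conv2 = GINConv(nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                           nn.Linear(hid_dim, hid_dim)))
        # Classifier over [GIN graph embedding || flattened persistence image]
        self.classifier = nn.Linear(hid_dim + pi_dim, num_classes)

    def embed(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return global_add_pool(h, batch)  # graph-level embedding

    def forward(self, x, edge_index, batch, pi_features):
        h = self.embed(x, edge_index, batch)
        return self.classifier(torch.cat([h, pi_features], dim=-1))


def stability_loss(model, data, data_pert, lambda_stab=0.1):
    """Penalize embedding drift between a graph and its edge-perturbed copy.

    This is one plausible reading of a 'stability constraint' motivated by
    persistence stability theorems; the exact regularizer in the paper may differ.
    """
    h = model.embed(data.x, data.edge_index, data.batch)
    h_pert = model.embed(data_pert.x, data_pert.edge_index, data_pert.batch)
    return lambda_stab * F.mse_loss(h, h_pert)
```

In training, the total objective would be the usual cross-entropy on the fused representation plus `stability_loss` evaluated on randomly edge-perturbed copies of each batch.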