Score: 1

Position: Don't be Afraid of Over-Smoothing And Over-Squashing

Published: January 12, 2026 | arXiv ID: 2601.07419v1

By: Niklas Kormann, Benjamin Doerr, Johannes F. Lutzeyer

Potential Business Impact:

Suggests practitioners can rely on standard, shallow GNN architectures without investing in over-smoothing or over-squashing mitigation techniques, simplifying graph-based machine learning pipelines.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Over-smoothing and over-squashing have been extensively studied in the literature on Graph Neural Networks (GNNs) in recent years. We challenge this prevailing focus in GNN research, arguing that these phenomena are less critical for practical applications than assumed. We suggest that performance decreases often stem from uninformative receptive fields rather than over-smoothing. We support this position with extensive experiments on several standard benchmark datasets, demonstrating that accuracy and over-smoothing are mostly uncorrelated and that optimal model depths remain small even with mitigation techniques, thus highlighting the negligible role of over-smoothing. Similarly, we challenge the assumption that over-squashing is always detrimental in practical applications. Instead, we posit that the distribution of relevant information over the graph frequently factorises and is often localised within a small k-hop neighbourhood, questioning the necessity of jointly observing entire receptive fields or engaging in an extensive search for long-range interactions. The results of our experiments show that architectural interventions designed to mitigate over-squashing fail to yield significant performance gains. This position paper advocates for a paradigm shift in theoretical research, urging a diligent analysis of learning tasks and datasets using statistics that measure the underlying distribution of label-relevant information to better understand their localisation and factorisation.
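To make the abstract's two measurement ideas concrete, here is a minimal NumPy sketch of (a) a commonly used over-smoothing proxy, the Dirichlet energy of node representations, and (b) a simple k-hop label-agreement statistic as one illustrative way to probe how localised label-relevant information is. Both functions are assumptions for illustration, not the paper's exact metrics, and they operate on a plain dense adjacency matrix rather than any specific GNN library.

```python
import numpy as np

def dirichlet_energy(X, A):
    """Dirichlet energy of node features X (n x d) under adjacency A (n x n).
    Equals 0.5 * sum_ij A_ij * ||x_i - x_j||^2; values shrinking towards zero
    across layers are commonly read as a sign of over-smoothing."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A               # unnormalised graph Laplacian
    return float(np.trace(X.T @ L @ X))

def khop_label_agreement(A, y, k):
    """Fraction of nodes whose majority label within their k-hop neighbourhood
    matches their own label; a rough, illustrative probe of how localised the
    label-relevant information is (not the statistic proposed in the paper)."""
    n = A.shape[0]
    reach = np.eye(n, dtype=bool)      # nodes reachable in <= k hops
    Ak = np.eye(n)
    for _ in range(k):
        Ak = Ak @ A
        reach |= Ak > 0
    agree = 0
    for v in range(n):
        nbrs = np.flatnonzero(reach[v])
        nbrs = nbrs[nbrs != v]         # exclude the node itself
        if nbrs.size == 0:
            continue
        labels, counts = np.unique(y[nbrs], return_counts=True)
        agree += labels[np.argmax(counts)] == y[v]
    return agree / n

if __name__ == "__main__":
    # Tiny synthetic example: random undirected graph, random features/labels.
    rng = np.random.default_rng(0)
    A = (rng.random((20, 20)) < 0.15).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                        # symmetric, no self-loops
    X = rng.normal(size=(20, 8))
    y = rng.integers(0, 3, size=20)
    print("Dirichlet energy:", dirichlet_energy(X, A))
    print("2-hop label agreement:", khop_label_agreement(A, y, 2))
```

In this reading, one would track the Dirichlet energy of representations across layers to see whether accuracy drops actually coincide with smoothing, and compare label agreement at small versus large k to judge whether a task genuinely needs long-range interactions.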

Country of Origin
🇫🇷 France

Page Count
21 pages

Category
Statistics: Machine Learning (stat.ML)