The Correspondence Between Bounded Graph Neural Networks and Fragments of First-Order Logic
By: Bernardo Cuenca Grau, Eva Feng, Przemysław A. Wałęga
Potential Business Impact:
Lets computers understand complex connections better.
Graph Neural Networks (GNNs) address two key challenges in applying deep learning to graph-structured data: they handle input graphs of varying size and ensure invariance under graph isomorphism. While GNNs have demonstrated broad applicability, understanding their expressive power remains an important question. In this paper, we propose GNN architectures that correspond precisely to prominent fragments of first-order logic (FO), including various modal logics as well as more expressive two-variable fragments. To establish these results, we apply methods from the finite model theory of first-order and modal logics to the domain of graph representation learning. Our results provide a unifying framework for understanding the logical expressiveness of GNNs within FO.
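To make the connection between message passing and logical operators concrete, here is a minimal sketch (not the paper's construction; all names and the threshold are illustrative) of a single GNN layer whose sum aggregation lets each node count neighbours satisfying a property. This is the kind of graded-modal-style test that counting fragments of FO can express:

```python
import numpy as np

def gnn_layer(adj, feats, threshold=2):
    """For each node, test whether at least `threshold` neighbours
    currently have their feature bit set (a graded-diamond-style check).

    adj    : (n, n) 0/1 adjacency matrix
    feats  : (n,)   0/1 node features
    returns: (n,)   0/1 updated node features
    """
    counts = adj @ feats              # sum aggregation over neighbours
    return (counts >= threshold).astype(int)

# Toy graph: node 0 is linked to nodes 1, 2, 3, which carry the feature.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]])
feats = np.array([0, 1, 1, 1])
print(gnn_layer(adj, feats))  # node 0 has 3 such neighbours -> [1 0 0 0]
```

Stacking such layers with different aggregation and combination functions is what varies the logical fragment a GNN architecture can capture; the paper makes these correspondences precise.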
Similar Papers
The Logical Expressiveness of Temporal GNNs via Two-Dimensional Product Logics
Machine Learning (CS)
Teaches computers to understand changing information over time.
Lecture Notes on Verifying Graph Neural Networks
Logic in Computer Science
Checks computer programs for mistakes using logic.
Statistical physics analysis of graph neural networks: Approaching optimality in the contextual stochastic block model
Disordered Systems and Neural Networks
Makes computers understand complex connections better.