Aggregate-Combine-Readout GNNs Are More Expressive Than Logic C2
By: Stan P Hauke, Przemysław Andrzej Wałęga
Potential Business Impact:
Makes computers understand complex data patterns better.
In recent years, there has been growing interest in understanding the expressive power of graph neural networks (GNNs) by relating them to logical languages. This line of research was initiated by an influential result of Barceló et al. (2020), who showed that graded modal logic (a guarded fragment of the logic C2) characterises the logical expressiveness of aggregate-combine GNNs. As a "challenging open problem", they left open the question of whether full C2 characterises the logical expressiveness of aggregate-combine-readout GNNs. This question has remained unresolved despite several attempts. In this paper, we solve the above open problem by proving that the logical expressiveness of aggregate-combine-readout GNNs strictly exceeds that of C2. This result holds over both undirected and directed graphs. Beyond its implications for GNNs, our work also yields purely logical insights into the expressive power of infinitary logics.
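To make the architecture under discussion concrete: an aggregate-combine-readout (ACR) GNN layer updates each node from its own features, an aggregation over its neighbours, and a global readout over all nodes; the readout is the component absent from plain aggregate-combine GNNs. The following is a minimal illustrative sketch of such a layer (sum aggregation, sum readout, ReLU combine), not the construction used in the paper; the function and weight names are ours.

```python
import numpy as np

def acr_gnn_layer(X, A, W_self, W_agg, W_read):
    """One aggregate-combine-readout layer (illustrative sketch).

    X: node features, shape (n, d); A: adjacency matrix, shape (n, n).
    - Aggregate: sum the features of each node's neighbours.
    - Readout: sum the features of ALL nodes, broadcast to every node
      (this global term is what distinguishes ACR-GNNs from AC-GNNs).
    - Combine: linearly mix the three terms, then apply ReLU.
    """
    agg = A @ X                                      # neighbourhood aggregation
    read = np.tile(X.sum(axis=0), (X.shape[0], 1))   # global readout, one row per node
    return np.maximum(0.0, X @ W_self + agg @ W_agg + read @ W_read)

# Tiny example: path graph 0-1-2 with scalar node features and identity weights.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.array([[1.0], [2.0], [3.0]])
I = np.eye(1)
H = acr_gnn_layer(X, A, I, I, I)
# Each node's new feature = own + neighbour sum + global sum (6):
# node 0: 1 + 2 + 6 = 9; node 1: 2 + 4 + 6 = 12; node 2: 3 + 2 + 6 = 11
```

With identity weights the readout term adds the same global quantity (here 6) to every node, which is exactly the kind of whole-graph information an AC-GNN layer cannot access.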
Similar Papers
The Logical Expressiveness of Temporal GNNs via Two-Dimensional Product Logics
Machine Learning (CS)
Teaches computers to understand changing information over time.
Verifying Graph Neural Networks with Readout is Intractable
Logic in Computer Science
Makes AI safer and smaller for computers.
Sound Logical Explanations for Mean Aggregation Graph Neural Networks
Machine Learning (CS)
Explains how AI learns from connected facts.