Expressive Power of Graph Transformers via Logic
By: Veeti Ahvonen, Maurice Funk, Damian Heiman, and others
Potential Business Impact:
Pinpoints what graph-based AI models can and cannot compute, helping engineers choose the right model for connected data.
Transformers are the basis of modern large language models, but relatively little is known about their precise expressive power on graphs. We study the expressive power of the graph transformers (GTs) of Dwivedi and Bresson (2020) and the GPS-networks of Rampášek et al. (2022), both under soft-attention and under average hard-attention. Our study covers two scenarios: the theoretical setting with real numbers and the more practical case with floats. With reals, we show that, when restricted to vertex properties definable in first-order logic (FO), GPS-networks have the same expressive power as graded modal logic (GML) with the global modality. With floats, GPS-networks turn out to be as expressive as GML with the counting global modality. The latter result is absolute: it does not restrict attention to properties definable in some background logic. We also obtain similar characterizations for GTs in terms of propositional logic with the global modality (for reals) and with the counting global modality (for floats).
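For readers unfamiliar with the logics named in the abstract, here is a minimal sketch of graded modal logic (GML) with the global and counting global modalities. This is a standard textbook-style presentation, not taken from the paper itself, whose exact definitions may differ in detail.

```latex
% Syntax of graded modal logic (GML), evaluated at a vertex v of a graph:
\varphi ::= p \mid \neg\varphi \mid (\varphi \land \varphi) \mid \Diamond_{\ge k}\,\varphi
% where v \models \Diamond_{\ge k}\,\varphi iff at least k neighbours of v satisfy \varphi.

% Global modality: v \models \langle E \rangle\,\varphi iff some vertex of the
% whole graph satisfies \varphi (regardless of where v sits).
% Counting global modality: v \models \langle E \rangle_{\ge k}\,\varphi iff at
% least k vertices of the graph satisfy \varphi.
```

Intuitively, the local modality counts among a vertex's neighbours, roughly what a message-passing step can do, while the global modalities see across the entire graph, roughly what a global attention or readout step can do; this is why such logics arise naturally in characterizations of graph transformers.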
Similar Papers
Plain Transformers Can be Powerful Graph Learners
Machine Learning (CS)
Makes computers understand connections in data better.
The Logical Expressiveness of Temporal GNNs via Two-Dimensional Product Logics
Machine Learning (CS)
Teaches computers to understand changing information over time.
Aggregate-Combine-Readout GNNs Are More Expressive Than Logic C2
Artificial Intelligence
Makes computers understand complex data patterns better.