A Logical View of GNN-Style Computation and the Role of Activation Functions
By: Pablo Barceló, Floris Geerts, Matthias Lanzinger, et al.
We study the numerical and Boolean expressiveness of MPLang, a declarative language that captures the computation of graph neural networks (GNNs) through linear message passing and activation functions. We begin with A-MPLang, the fragment without activation functions, and characterize its expressive power in terms of walk-summed features. For bounded activation functions, we show that, under mild conditions, all eventually constant activations yield the same expressive power, both numerical and Boolean, and that this power subsumes previously established logics for GNNs with eventually constant activation functions but without linear layers. Finally, we prove the first expressive separation between unbounded and bounded activations in the presence of linear layers: MPLang with ReLU is strictly more powerful for numerical queries than MPLang with eventually constant activation functions such as truncated ReLU. The separation hinges on subtle interactions between linear aggregation and eventually constant non-linearities, and it establishes that GNNs using ReLU are more expressive than those restricted to eventually constant activations and linear layers.
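The abstract refers to layers built from linear message passing followed by a pointwise activation, and contrasts the unbounded ReLU with eventually constant activations such as truncated ReLU. The sketch below is not taken from the paper; the layer shape, weight names, and example graph are illustrative assumptions meant only to show what such a layer computes and how the two activations differ.

```python
import numpy as np

def relu(x):
    # Unbounded activation.
    return np.maximum(x, 0.0)

def truncated_relu(x):
    # Eventually constant (bounded) activation: saturates at 1.
    return np.minimum(np.maximum(x, 0.0), 1.0)

def mp_layer(A, H, W_self, W_msg, b, sigma):
    """One round of linear message passing followed by activation sigma.

    A      : (n, n) adjacency matrix
    H      : (n, d) node features
    W_self : (d, k) weights applied to each node's own features
    W_msg  : (d, k) weights applied to the summed neighbour features
    b      : (k,)   bias
    """
    return sigma(H @ W_self + A @ H @ W_msg + b)

# Tiny example: a path graph on 3 nodes with scalar features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1.], [2.], [3.]])
W_self = np.array([[1.]])
W_msg = np.array([[1.]])
b = np.zeros(1)

print(mp_layer(A, H, W_self, W_msg, b, relu))            # [[3.] [6.] [5.]]
print(mp_layer(A, H, W_self, W_msg, b, truncated_relu))  # [[1.] [1.] [1.]]
```

Iterating `mp_layer` with the identity in place of `sigma` corresponds to the activation-free A-MPLang setting, where repeated linear aggregation over the adjacency matrix yields features that sum contributions over walks in the graph; with `truncated_relu` the values saturate, which is the kind of bounded behaviour the paper's separation result exploits.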