Verifying Graph Neural Networks with Readout is Intractable
By: Artem Chernobrovkin, Marco Sälzer, François Schwarzentruber, and more
Potential Business Impact:
Makes AI models both safer to verify and lighter to run.
We introduce a logical language for reasoning about quantized aggregate-combine graph neural networks with global readout (ACR-GNNs). We provide a logical characterization and use it to prove that verification tasks for quantized GNNs with readout are (co)NEXPTIME-complete. This result shows that verification of quantized GNNs with readout is computationally intractable, underscoring the need for substantial research effort to ensure the safety of GNN-based systems. We also experimentally demonstrate that quantized ACR-GNN models are lightweight while maintaining good accuracy and generalization capabilities relative to non-quantized models.
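To make the model class concrete, here is a minimal sketch of a single quantized aggregate-combine-readout layer. The weight names, the uniform quantization grid, and the use of sum aggregation with ReLU are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def quantize(x, scale=8):
    # Hypothetical uniform quantization: snap activations to a grid of step 1/scale.
    return np.round(x * scale) / scale

def acr_layer(H, A, W_self, W_agg, W_read, b):
    # H: (n, d) node features; A: (n, n) adjacency matrix.
    agg = A @ H                                      # aggregate: sum over neighbors
    read = np.tile(H.sum(axis=0), (H.shape[0], 1))   # global readout: sum over all nodes
    out = H @ W_self + agg @ W_agg + read @ W_read + b  # combine the three terms
    return quantize(np.maximum(out, 0))              # ReLU, then quantize

# Usage on a small random graph (illustrative values):
n, d = 4, 3
rng = np.random.default_rng(0)
H = rng.standard_normal((n, d))
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 0)  # no self-loops
H1 = acr_layer(H, A,
               rng.standard_normal((d, d)),
               rng.standard_normal((d, d)),
               rng.standard_normal((d, d)),
               np.zeros(d))
```

The readout term is what distinguishes ACR-GNNs from plain aggregate-combine GNNs: every node sees a summary of the whole graph at every layer, which is also what drives the (co)NEXPTIME-hardness of verification.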
Similar Papers
Aggregate-Combine-Readout GNNs Are More Expressive Than Logic C2
Artificial Intelligence
Makes computers understand complex data patterns better.
Lecture Notes on Verifying Graph Neural Networks
Logic in Computer Science
Checks computer programs for mistakes using logic.
Exact Verification of Graph Neural Networks with Incremental Constraint Solving
Machine Learning (CS)
Protects smart computer networks from being tricked.