Recursive querying of neural networks via weighted structures
By: Martin Grohe, Christoph Standke, Juno Steegmans, and more
Potential Business Impact:
Lets people query trained neural networks to check and explain what they learned.
Expressive querying of machine learning models, viewed as a form of intensional data, enables their verification and interpretation using declarative languages, thereby making learned representations of data more accessible. Motivated by the querying of feedforward neural networks, we investigate logics for weighted structures. In the absence of a bound on neural network depth, such logics must incorporate recursion; to that end, we revisit the functional fixpoint mechanism proposed by Grädel and Gurevich. We cast it in a Datalog-like syntax; we extend normal forms for fixpoint logics to weighted structures; and we show an equivalent "loose" fixpoint mechanism that allows values of inductively defined weight functions to be overwritten. We propose a "scalar" restriction of functional fixpoint logic, with polynomial-time data complexity, and show that it can express all PTIME model-agnostic queries over reduced networks with polynomially bounded weights. In contrast, we show that very simple model-agnostic queries are already NP-complete. Finally, we consider transformations of weighted structures by iterated transductions.
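To make the abstract's central idea concrete, the sketch below illustrates the kind of recursively defined weight function that a fixpoint mechanism over weighted structures can express: a feedforward ReLU network is presented as a weighted structure (edge weights, node biases, input values), and a neuron's value is defined by recursion on its predecessors, roughly val(x) = relu(sum over y of w(y,x)·val(y) + bias(x)), with inputs as base cases. This is a minimal illustration under assumed encodings, not the paper's formal Datalog-like syntax; all identifiers (Network, value) are hypothetical.

```python
# Sketch: evaluating a feedforward ReLU network viewed as a weighted
# structure, via a recursively defined (memoized) weight function.
# The recursion mirrors a fixpoint definition:
#   val(x) = input(x)                                  if x is an input node
#   val(x) = relu( sum_y w(y, x) * val(y) + bias(x) )  otherwise
# All names below are illustrative assumptions, not the paper's syntax.

from dataclasses import dataclass, field

@dataclass
class Network:
    inputs: dict                                 # input node -> given value
    bias: dict                                   # non-input node -> bias weight
    edges: dict = field(default_factory=dict)    # node -> list of (predecessor, weight)

def value(net: Network, node, memo=None):
    """Compute val(node) by recursion on predecessors, caching results."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    if node in net.inputs:                       # base case: input node
        memo[node] = net.inputs[node]
        return memo[node]
    s = net.bias[node] + sum(w * value(net, pred, memo)
                             for pred, w in net.edges.get(node, []))
    memo[node] = max(0.0, s)                     # ReLU activation
    return memo[node]

# Tiny example: hidden neuron h fed by inputs x1, x2; output neuron o.
net = Network(
    inputs={"x1": 1.0, "x2": -2.0},
    bias={"h": 0.5, "o": 0.0},
    edges={"h": [("x1", 2.0), ("x2", 1.0)],
           "o": [("h", 3.0)]},
)
print(value(net, "o"))  # relu(3 * relu(2*1.0 + 1*(-2.0) + 0.5)) = 1.5
```

Because the recursion runs over the network graph rather than a fixed number of layers, it handles networks of unbounded depth, which is exactly why the logics studied in the paper must incorporate a recursion (fixpoint) mechanism.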
Similar Papers
On the Limits of Hierarchically Embedded Logic in Classical Neural Networks
Artificial Intelligence
Shows that standard neural networks hit limits on layered logical reasoning.
Lecture Notes on Verifying Graph Neural Networks
Logic in Computer Science
Uses logic to check graph neural networks for mistakes.
On Scaling Neurosymbolic Programming through Guided Logical Inference
Artificial Intelligence
Speeds up learning systems that combine neural networks with logical inference.