Logic Gate Neural Networks are Good for Verification
By: Fabian Kresse, Emily Yu, Christoph H. Lampert, et al.
Potential Business Impact:
Makes AI easier to check for mistakes.
Learning-based systems are increasingly deployed across various domains, yet the complexity of traditional neural networks poses significant challenges for formal verification. Unlike conventional neural networks, learned Logic Gate Networks (LGNs) replace multiplications with Boolean logic gates, yielding a sparse, netlist-like architecture that is inherently more amenable to symbolic verification, while still delivering promising performance. In this paper, we introduce a SAT encoding for verifying global robustness and fairness in LGNs. We evaluate our method on five benchmark datasets, including a newly constructed 5-class variant, and find that LGNs are both verification-friendly and maintain strong predictive performance.
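To make the verification idea concrete, here is a minimal sketch of the property being checked. The gate network, wire indices, and gate choices below are hypothetical illustrations, not the paper's architecture or its SAT encoding: a real verifier would Tseitin-encode each gate into CNF clauses and ask a SAT solver to search for a counterexample symbolically, whereas this toy version simply enumerates inputs to test single-bit-flip robustness.

```python
from itertools import product

# Toy logic gate network (hypothetical, for illustration only).
# Each gate is (op, in1, in2) over wire indices; inputs occupy wires 0-3,
# each gate appends one new wire, and the last wire is the output.
GATES = [
    ("and", 0, 1),  # wire 4
    ("xor", 2, 3),  # wire 5
    ("or",  4, 5),  # wire 6 (output)
]

OPS = {
    "and": lambda a, b: a & b,
    "or":  lambda a, b: a | b,
    "xor": lambda a, b: a ^ b,
}

def evaluate(inputs):
    """Forward pass through the gate list; returns the output wire."""
    wires = list(inputs)
    for op, i, j in GATES:
        wires.append(OPS[op](wires[i], wires[j]))
    return wires[-1]

def globally_robust(num_inputs=4):
    """Brute-force robustness check: is there any input where flipping a
    single bit changes the output?  A SAT-based verifier would pose the
    same question as a satisfiability query instead of enumerating."""
    for x in product([0, 1], repeat=num_inputs):
        y = evaluate(x)
        for k in range(num_inputs):
            flipped = list(x)
            flipped[k] ^= 1
            if evaluate(flipped) != y:
                return False, (x, k)  # counterexample: input x, bit k
    return True, None

robust, witness = globally_robust()
```

Because every neuron is a two-input Boolean gate, the whole network is already a netlist; the SAT encoding only needs one small clause set per gate, which is what makes LGNs comparatively cheap to verify. This toy network is not robust (flipping an input to the XOR gate changes the output), so `globally_robust()` returns a counterexample witness.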
Similar Papers
Mind the Gap: Removing the Discretization Gap in Differentiable Logic Gate Networks
Machine Learning (CS)
Makes smart computer pictures learn much faster.
Logic Tensor Network-Enhanced Generative Adversarial Network
Machine Learning (CS)
Makes AI create pictures that follow rules.
A Method for Optimizing Connections in Differentiable Logic Gate Networks
Machine Learning (CS)
Makes computers learn with fewer logic parts.