Proof Minimization in Neural Network Verification
By: Omri Isac, Idan Refaeli, Haoze Wu, and more
Potential Business Impact:
Makes the proofs behind AI safety checks smaller and faster to verify.
The widespread adoption of deep neural networks (DNNs) requires efficient techniques for verifying their safety. DNN verifiers are complex tools, which might contain bugs that could compromise their soundness and undermine the reliability of the verification process. This concern can be mitigated using proofs: artifacts that are checkable by an external and reliable proof checker, and which attest to the correctness of the verification process. However, such proofs tend to be extremely large, limiting their use in many scenarios. In this work, we address this problem by minimizing proofs of unsatisfiability produced by DNN verifiers. We present algorithms that remove facts that were learned during the verification process but are unnecessary for the proof itself. Conceptually, our method analyzes the dependencies among facts used to deduce UNSAT, and removes facts that did not contribute. We then further minimize the proof by eliminating remaining unnecessary dependencies, using two alternative procedures. We implemented our algorithms on top of a proof-producing DNN verifier and evaluated them across several benchmarks. Our results show that our best-performing algorithm reduces proof size by 37%-82% and proof checking time by 30%-88%, while introducing a runtime overhead of 7%-20% to the verification process itself.
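To make the core idea concrete, the sketch below illustrates dependency-based pruning on a toy proof structure: starting from the final UNSAT conclusion, traverse its premises transitively and discard any learned fact that is never reached. This is a minimal conceptual sketch only, not the paper's actual algorithm or data structures; the function minimize_proof, the dictionary representation of the proof, and all fact names are illustrative assumptions.

```python
# Conceptual sketch (assumed representation, not the paper's implementation):
# prune a proof so that only facts the UNSAT conclusion depends on remain.

from collections import deque

def minimize_proof(premises_of, unsat_node):
    """Keep only the facts that the UNSAT conclusion transitively depends on.

    premises_of: dict mapping each derived fact to the list of facts it was
                 deduced from (leaf facts map to an empty list).
    unsat_node:  identifier of the final UNSAT conclusion.
    """
    needed = set()
    queue = deque([unsat_node])
    while queue:
        fact = queue.popleft()
        if fact in needed:
            continue
        needed.add(fact)
        # Follow the dependency edges backward toward the leaves.
        queue.extend(premises_of.get(fact, []))
    # Return the proof restricted to the facts that actually contributed.
    return {fact: premises_of.get(fact, []) for fact in needed}

if __name__ == "__main__":
    # Toy example: 'lemma_b' was learned during the search but never used
    # to derive UNSAT, so minimization drops it.
    proof = {
        "UNSAT": ["lemma_a", "bound_1"],
        "lemma_a": ["bound_1", "bound_2"],
        "lemma_b": ["bound_3"],   # unused learned fact
        "bound_1": [],
        "bound_2": [],
        "bound_3": [],
    }
    minimized = minimize_proof(proof, "UNSAT")
    print(sorted(minimized))  # ['UNSAT', 'bound_1', 'bound_2', 'lemma_a']
```

The traversal is linear in the size of the proof, which is why, per the abstract, the verification overhead stays modest (7%-20%) even though the resulting proofs are substantially smaller and faster to check.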
Similar Papers
Proof-Driven Clause Learning in Neural Network Verification
Logic in Computer Science
Checks if AI makes safe decisions.
Abstraction-Based Proof Production in Formal Verification of Neural Networks
Logic in Computer Science
Makes AI trustworthy by checking its work.
Attack logics, not outputs: Towards efficient robustification of deep neural networks by falsifying concept-based properties
Cryptography and Security
Makes AI understand things more logically and safely.