Abstraction-Based Proof Production in Formal Verification of Neural Networks
By: Yizhak Yisrael Elboher, Omri Isac, Guy Katz, and more
Potential Business Impact:
Makes AI trustworthy by checking its work.
Modern verification tools for deep neural networks (DNNs) increasingly rely on abstraction to scale to realistic architectures. In parallel, proof production is becoming a critical requirement for increasing the reliability of DNN verification results. However, current proof-producing verifiers do not support abstraction-based reasoning, creating a gap between scalability and provable guarantees. We address this gap by introducing a novel framework for proof-producing abstraction-based DNN verification. Our approach modularly separates the verification task into two components: (i) proving the correctness of an abstract network, and (ii) proving the soundness of the abstraction with respect to the original DNN. The former can be handled by existing proof-producing verifiers, whereas we propose the first method for generating formal proofs for the latter. This preliminary work aims to enable scalable and trustworthy verification by supporting common abstraction techniques within a formal proof framework.
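To make the soundness notion in the abstract concrete, here is a minimal, hypothetical sketch (not the paper's method, and not using a real verifier): it builds a tiny ReLU network, forms an "abstract" version by widening each weight into an interval (standing in for abstraction techniques such as neuron merging), and checks via interval propagation that the abstract network's output bounds contain the concrete ones. That containment is exactly why a safety property proved on the abstract network transfers to the original DNN. All weights, bounds, and function names are illustrative assumptions.

```python
def interval_affine(x_lo, x_hi, W_lo, W_hi, b):
    """Propagate the input box [x_lo, x_hi] through y = W x + b, where each
    weight is itself an interval [W_lo, W_hi] (a concrete network has
    W_lo == W_hi)."""
    y_lo, y_hi = [], []
    for row_lo, row_hi, bias in zip(W_lo, W_hi, b):
        lo = hi = bias
        for wl, wh, xl, xh in zip(row_lo, row_hi, x_lo, x_hi):
            # Interval multiplication: take min/max over endpoint products.
            prods = [wl * xl, wl * xh, wh * xl, wh * xh]
            lo += min(prods)
            hi += max(prods)
        y_lo.append(lo)
        y_hi.append(hi)
    return y_lo, y_hi

def relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Concrete 2-2-1 ReLU network (weights are made up for illustration).
W1, b1 = [[1.0, -2.0], [0.5, 1.5]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.0]

# "Abstract" network: every weight widened by eps, so by construction it
# over-approximates the concrete network (a stand-in for neuron merging).
eps = 0.1
W1_lo = [[w - eps for w in row] for row in W1]
W1_hi = [[w + eps for w in row] for row in W1]
W2_lo = [[w - eps for w in row] for row in W2]
W2_hi = [[w + eps for w in row] for row in W2]

x_lo, x_hi = [0.0, 0.0], [1.0, 1.0]  # input region to verify over

# Output bounds of the concrete network.
h_lo, h_hi = relu(*interval_affine(x_lo, x_hi, W1, W1, b1))
c_lo, c_hi = interval_affine(h_lo, h_hi, W2, W2, b2)

# Output bounds of the abstract network.
ah_lo, ah_hi = relu(*interval_affine(x_lo, x_hi, W1_lo, W1_hi, b1))
a_lo, a_hi = interval_affine(ah_lo, ah_hi, W2_lo, W2_hi, b2)

# Soundness check: abstract bounds contain concrete bounds, so a property
# like "output <= 5" proved on the abstract network holds on the original.
assert a_lo[0] <= c_lo[0] and c_hi[0] <= a_hi[0]
print("concrete:", (c_lo[0], c_hi[0]), "abstract:", (a_lo[0], a_hi[0]))
```

The paper's contribution, as the abstract describes it, is producing a formal proof of this containment step, rather than merely computing it numerically as done here.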
Similar Papers
Automated Verification of Soundness of DNN Certifiers
Programming Languages
Makes AI trustworthy for important jobs.
Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation
Artificial Intelligence
Checks AI safety more precisely.
Proof Minimization in Neural Network Verification
Logic in Computer Science
Makes AI safety checks smaller and faster.