Automated Verification of Soundness of DNN Certifiers
By: Avaljot Singh, Yasmin Chandini Sarita, Charith Mendis, and others
Potential Business Impact:
Makes AI trustworthy enough for safety-critical jobs.
The uninterpretability of Deep Neural Networks (DNNs) hinders their use in safety-critical applications. Abstract Interpretation-based DNN certifiers offer a promising avenue for building trust in DNNs, but any unsoundness in their mathematical logic can lead to incorrect results. Current approaches to ensuring soundness rely on manual, expert-driven proofs that are tedious to develop, which limits the pace at which new certifiers can be built. Automating the verification process is challenging because it must cover certifiers for arbitrary DNN architectures and handle diverse abstract analyses. We introduce ProveSound, a novel verification procedure that automates the soundness verification of DNN certifiers for arbitrary DNN architectures. Our core contribution is the concept of a symbolic DNN, with which ProveSound reduces the soundness property, a universal quantification over arbitrary DNNs, to a tractable symbolic representation that can be verified with standard SMT solvers. By formalizing the syntax and operational semantics of ConstraintFlow, a DSL for specifying certifiers, ProveSound efficiently verifies both existing and new certifiers while handling arbitrary DNN architectures. Our code is available at https://github.com/uiuc-focal-lab/constraintflow.git
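The reduction to SMT can be made concrete with a toy example. The sketch below is a hypothetical illustration, not ProveSound's actual encoding: it uses Z3's Python API (the z3-solver package) and a hand-written interval abstract transformer for ReLU, neither of which appears in the summary above. It shows the general recipe: treat both the concrete input and the abstract bounds as symbolic values, state the soundness property, and ask the solver for a counterexample.

# A minimal sketch, assuming Z3's Python API (pip install z3-solver).
# Checks soundness of the standard interval transformer for ReLU
# on a single symbolic value; this is illustrative, not ProveSound's encoding.
from z3 import Real, Solver, If, And, Not, unsat

# Symbolic concrete input x and symbolic abstract bounds [l, u].
x, l, u = Real('x'), Real('l'), Real('u')

# Concrete semantics: ReLU(x) = max(x, 0).
relu_x = If(x >= 0, x, 0)

# Candidate abstract transformer: ReLU#([l, u]) = [max(l, 0), max(u, 0)].
out_l = If(l >= 0, l, 0)
out_u = If(u >= 0, u, 0)

# Soundness: every concrete input within [l, u] must map into [out_l, out_u].
soundness = And(out_l <= relu_x, relu_x <= out_u)

# Ask the solver for a counterexample to soundness.
s = Solver()
s.add(l <= u, l <= x, x <= u, Not(soundness))
if s.check() == unsat:
    print("transformer is sound")        # no violating input exists
else:
    print("counterexample:", s.model())  # a concrete input escaping the bounds

An unsat answer means no concrete input inside the abstract bounds can escape the transformer's output bounds. The symbolic DNN described in the abstract generalizes this style of query from one fixed operation to arbitrary architectures.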
Similar Papers
Abstraction-Based Proof Production in Formal Verification of Neural Networks
Logic in Computer Science
Makes AI trustworthy by checking its work.
No Soundness in the Real World: On the Challenges of the Verification of Deployed Neural Networks
Machine Learning (CS)
Makes sure AI systems are safe to use.
Proof-Driven Clause Learning in Neural Network Verification
Logic in Computer Science
Checks if AI makes safe decisions.