Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation
By: Luca Marzari, Isabella Mastroeni, Alessandro Farinelli
Potential Business Impact:
Checks AI safety with graded levels, not just pass/fail.
Traditional methods for formal verification (FV) of deep neural networks (DNNs) are constrained by a binary encoding of safety properties, where a model is classified as either safe or unsafe (robust or not robust). This binary encoding fails to capture the nuanced safety levels within a model, often resulting in requirements that are either overly restrictive or too permissive. In this paper, we introduce a novel problem formulation called Abstract DNN-Verification, which verifies a hierarchical structure of unsafe outputs, providing a more granular analysis of the safety of a given DNN. Crucially, by leveraging abstract interpretation and reasoning about output reachable sets, our approach enables assessing multiple safety levels during the FV process, requiring in the worst case the same computational effort as the traditional binary verification approach, and potentially less. Specifically, we demonstrate how this formulation allows ranking adversarial inputs according to their abstract safety level violation, offering a more detailed evaluation of the model's safety and robustness. Our contributions include a theoretical exploration of the relationship between our abstract safety formulation and existing approaches that employ abstract interpretation for robustness verification, a complexity analysis of the newly introduced problem, and an empirical evaluation on both a complex deep reinforcement learning task (based on Habitat 3.0) and standard DNN-Verification benchmarks.
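The abstract's central idea, reading off a graded safety level from an over-approximated output reachable set instead of a single safe/unsafe verdict, can be sketched with plain interval bound propagation. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: the toy network, the perturbation radius, and the ordered list of unsafe output boxes are all made-up assumptions, and the returned level is simply the most severe unsafe region the reachable set may still intersect (0 meaning provably safe).

```python
import numpy as np

def interval_bound_propagation(weights, biases, lo, hi):
    """Propagate an input box [lo, hi] through a ReLU network and return
    an over-approximating box of the reachable outputs."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

def abstract_safety_level(out_lo, out_hi, unsafe_hierarchy):
    """Return the rank of the most severe unsafe region that the output
    reachable box can still intersect (0 = provably safe for all regions).
    `unsafe_hierarchy` lists output-space boxes from least to most severe."""
    level = 0
    for rank, (u_lo, u_hi) in enumerate(unsafe_hierarchy, start=1):
        # Two boxes intersect iff they overlap in every output dimension.
        if np.all(out_hi >= u_lo) and np.all(out_lo <= u_hi):
            level = rank
    return level

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 2-3-2 ReLU network with random (hypothetical) parameters.
    weights = [rng.standard_normal((3, 2)), rng.standard_normal((2, 3))]
    biases = [rng.standard_normal(3), rng.standard_normal(2)]

    # Perturbation box around a nominal input (hypothetical values).
    x, eps = np.array([0.5, -0.2]), 0.05
    out_lo, out_hi = interval_bound_propagation(weights, biases, x - eps, x + eps)

    # Hierarchy of unsafe output regions, least to most severe (hypothetical).
    unsafe_hierarchy = [
        (np.array([1.0, -np.inf]), np.array([np.inf, np.inf])),  # mildly unsafe
        (np.array([2.0, -np.inf]), np.array([np.inf, np.inf])),  # severely unsafe
    ]
    print("output bounds:", out_lo, out_hi)
    print("abstract safety level:", abstract_safety_level(out_lo, out_hi, unsafe_hierarchy))
```

Under this reading, ranking adversarial inputs amounts to comparing the safety levels their perturbation boxes induce: an input whose reachable outputs can only touch the mild region is less critical than one whose outputs can reach the severe region, while a binary check would flag both identically.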
Similar Papers
Abstraction-Based Proof Production in Formal Verification of Neural Networks
Logic in Computer Science
Makes AI trustworthy by checking its work.
Scenario-based Compositional Verification of Autonomous Systems with Neural Perception
Machine Learning (CS)
Makes self-driving cars safer in changing weather.
Verification-Guided Falsification for Safe RL via Explainable Abstraction and Risk-Aware Exploration
Artificial Intelligence
Makes robots safer by checking their actions.