Floating-Point Neural Network Verification at the Software Level
By: Edoardo Manino, Bruno Farias, Rafael Sá Menezes, and more
Potential Business Impact:
Checks if AI code is safe for important jobs.
The behaviour of neural network components must be proven correct before deployment in safety-critical systems. Unfortunately, existing neural network verification techniques cannot certify the absence of faults at the software level. In this paper, we show how to specify and verify that neural networks are safe by explicitly reasoning about their floating-point implementation. In doing so, we construct NeuroCodeBench 2.0, a benchmark comprising 912 neural network verification examples that cover activation functions, common layers, and full neural networks of up to 170K parameters. Our verification suite is written in plain C and is compatible with the format of the International Competition on Software Verification (SV-COMP). This compatibility lets us conduct the first rigorous evaluation of eight state-of-the-art software verifiers on neural network code. The results show that existing automated verification tools can correctly solve an average of 11% of our benchmark, while producing incorrect verdicts on around 3% of it. At the same time, a historical analysis reveals that the release of our benchmark has already had a significantly positive impact on the rate of incorrect verdicts.
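The paper's harnesses are not reproduced here, but the SV-COMP format it targets follows well-known conventions, so a minimal sketch can illustrate what verifying a floating-point activation function at the software level looks like. The sketch below assumes the standard SV-COMP primitives (`__VERIFIER_nondet_float` for a nondeterministic input, `reach_error` as the violation target); the ReLU property itself is a hypothetical illustration, not an example taken from NeuroCodeBench 2.0.

```c
#include <math.h>

/* Standard SV-COMP conventions: the verifier chooses the input value,
   and reaching reach_error() means the safety property is violated. */
extern float __VERIFIER_nondet_float(void);
extern void reach_error(void);

/* A float32 ReLU, as it would actually execute in software. */
static float relu(float x) { return x > 0.0f ? x : 0.0f; }

int main(void) {
    float x = __VERIFIER_nondet_float(); /* any float32 value */
    if (isnan(x))
        return 0; /* restrict the property to non-NaN inputs */

    float y = relu(x);

    /* Hypothetical property: for any non-NaN input, the output is
       non-negative and non-NaN under IEEE 754 float semantics. */
    if (y < 0.0f || isnan(y))
        reach_error();
    return 0;
}
```

A software verifier then attempts to prove `reach_error` unreachable over all float32 inputs, which requires bit-accurate floating-point reasoning rather than the real-arithmetic abstractions common in neural network verifiers; this is precisely the gap the benchmark is designed to measure.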
Similar Papers
No Soundness in the Real World: On the Challenges of the Verification of Deployed Neural Networks
Machine Learning (CS)
Makes sure AI systems are safe to use.
Verifying rich robustness properties for neural networks
Logic in Computer Science
Makes AI decisions more trustworthy and reliable.
Algorithms and data structures for automatic precision estimation of neural networks
Data Structures and Algorithms
Fixes computer "brain" mistakes for better results.