No Soundness in the Real World: On the Challenges of the Verification of Deployed Neural Networks
By: Attila Szász, Balázs Bánhelyi, Márk Jelasity
Potential Business Impact:
Shows that today's tools for certifying neural network safety can be fooled by how the deployment environment rounds and orders floating-point operations, so a "verified" AI system may still behave unsafely in production.
The ultimate goal of verification is to guarantee the safety of deployed neural networks. Here, we claim that all the state-of-the-art verifiers we are aware of fail to reach this goal. Our key insight is that theoretical soundness (bounding the full-precision output while computing with floating point) does not imply practical soundness (bounding the floating point output in a potentially stochastic environment). We prove this observation for the approaches that are currently used to achieve provable theoretical soundness, such as interval analysis and its variants. We also argue that achieving practical soundness is significantly harder computationally. We support our claims empirically as well by evaluating several well-known verification methods. To mislead the verifiers, we create adversarial networks that detect and exploit features of the deployment environment, such as the order and precision of floating point operations. We demonstrate that all the tested verifiers are vulnerable to our new deployment-specific attacks, which proves that they are not practically sound.
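The central point is that floating-point evaluation in the deployed environment need not agree with the full-precision semantics that verifiers reason about. As a minimal, self-contained sketch (a toy example with assumed weights, not the paper's adversarial construction), the NumPy snippet below shows a single dot product whose float32 result depends on the summation order, and one of the orders falls outside the bound a verifier would certify for real arithmetic:

```python
# Toy illustration (assumed values, not the paper's construction) of how the
# deployment environment's floating-point behavior can escape a bound that is
# valid for real (full-precision) arithmetic.
import numpy as np

w = np.array([1e8, 1.0, -1e8], dtype=np.float32)  # toy weights (assumption)
x = np.array([1.0, 1.0, 1.0], dtype=np.float32)   # toy input (assumption)

# Exact real-arithmetic dot product: 1e8 + 1 - 1e8 = 1.0, so a theoretically
# sound verifier may certify the tight output bound [1.0, 1.0].

# Deployment order A: left-to-right float32 accumulation.
# 1e8 + 1 rounds back to 1e8 in float32 (1 is below half an ulp at that
# magnitude), so the final result is 0.0 -- outside the real-arithmetic bound.
acc = np.float32(0.0)
for wi, xi in zip(w, x):
    acc = np.float32(acc + wi * xi)
print(acc)       # 0.0

# Deployment order B: pair the large terms first (e.g. a vectorized kernel).
# 1e8 and -1e8 cancel exactly, so the small term survives.
pairwise = np.float32(w[0] * x[0] + w[2] * x[2]) + np.float32(w[1] * x[1])
print(pairwise)  # 1.0
```

This order- and precision-dependence is exactly the kind of deployment-environment feature that, per the abstract, the adversarial networks are built to detect and exploit.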
Similar Papers
Floating-Point Neural Network Verification at the Software Level
Software Engineering
Checks that neural network code is safe under the floating-point arithmetic it actually runs in safety-critical software.
Automated Verification of Soundness of DNN Certifiers
Programming Languages
Automatically checks that the tools used to certify neural networks are themselves sound, making their safety claims more trustworthy.
Abstraction-Based Proof Production in Formal Verification of Neural Networks
Logic in Computer Science
Generates proofs, via abstraction, while formally verifying neural networks, so the verifier's answers can themselves be checked.