SDP-CROWN: Efficient Bound Propagation for Neural Network Verification with Tightness of Semidefinite Programming
By: Hong-Ming Chiu, Hao Chen, Huan Zhang, and more
Potential Business Impact:
Makes AI safer by checking its decisions.
Neural network verifiers based on linear bound propagation scale impressively to massive models but can be surprisingly loose when neuron coupling is crucial. Conversely, semidefinite programming (SDP) verifiers capture inter-neuron coupling naturally, but their cubic complexity restricts them to only small models. In this paper, we propose SDP-CROWN, a novel hybrid verification framework that combines the tightness of SDP relaxations with the scalability of bound-propagation verifiers. At the core of SDP-CROWN is a new linear bound, derived via SDP principles, that explicitly captures $\ell_{2}$-norm-based inter-neuron coupling while adding only one extra parameter per layer. This bound can be integrated seamlessly into any linear bound-propagation pipeline, preserving the inherent scalability of such methods yet significantly improving tightness. In theory, we prove that our inter-neuron bound can be up to a factor of $\sqrt{n}$ tighter than traditional per-neuron bounds. In practice, when incorporated into the state-of-the-art $\alpha$-CROWN verifier, we observe markedly improved verification performance on large models with up to 65 thousand neurons and 2.47 million parameters, achieving tightness that approaches that of costly SDP-based methods.
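As a quick illustration of the $\sqrt{n}$ claim (our toy example, not taken from the paper): suppose the perturbation $x \in \mathbb{R}^{n}$ is confined to an $\ell_{2}$ ball of radius $\rho$, and we want an upper bound on the sum $\sum_{i} x_{i}$. Per-neuron bounds treat each coordinate independently, while a single $\ell_{2}$ constraint couples them:
$$\sum_{i=1}^{n} x_i \;\le\; \sum_{i=1}^{n} |x_i| \;\le\; n\rho \quad \text{(per-neuron)}, \qquad \sum_{i=1}^{n} x_i \;\le\; \sqrt{n}\,\lVert x \rVert_2 \;\le\; \sqrt{n}\,\rho \quad \text{(coupled, by Cauchy--Schwarz)}.$$
The coupled bound is a factor of $\sqrt{n}$ tighter, which matches the worst-case gap the paper proves between its inter-neuron bound and traditional per-neuron bounds.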
Similar Papers
Interior-Point Vanishing Problem in Semidefinite Relaxations for Neural Network Verification
Machine Learning (CS)
Fixes a flaw in the tools that verify neural networks.
Lipschitz-Based Robustness Certification for Recurrent Neural Networks via Convex Relaxation
Systems and Control
Makes AI safer for important jobs.
Clip-and-Verify: Linear Constraint-Driven Domain Clipping for Accelerating Neural Network Verification
Machine Learning (CS)
Speeds up the checks that keep AI safe.