Learning to Split: A Reinforcement-Learning-Guided Splitting Heuristic for Neural Network Verification
By: Maya Swisa, Guy Katz
Potential Business Impact:
Teaches computers to check AI faster.
State-of-the-art neural network verifiers operate by encoding neural network verification as a constraint satisfaction problem. When dealing with standard piecewise-linear activation functions, such as ReLUs, verifiers typically employ branching heuristics that break a complex constraint satisfaction problem into multiple, simpler sub-problems. The verifier's performance depends heavily on the order in which this branching is performed: a poor selection may give rise to exponentially many sub-problems, hampering scalability. Here, we focus on the setting where multiple verification queries must be solved for the same neural network. The core idea is to use past experience to make good branching decisions, expediting verification. We present a reinforcement-learning-based branching heuristic that achieves this by applying a learning-from-demonstrations technique (Deep Q-learning from Demonstrations, DQfD). Our experimental evaluation demonstrates a substantial reduction in average verification time and in the average number of iterations required, compared to modern splitting heuristics. These results highlight the significant potential of reinforcement learning in the context of neural network verification.
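To make the branching idea concrete, below is a minimal sketch of a ReLU branch-and-bound loop in which a learned scoring function (standing in for a trained DQfD Q-network) picks which unstable neuron to split next. This is not the authors' implementation; all names (SubProblem, q_score, bound_subproblem, verify) and the toy bounding behavior are illustrative assumptions.

```python
# Hypothetical sketch of RL-guided ReLU splitting in branch-and-bound.
# All names and behaviors here are illustrative, not the paper's actual API.
from dataclasses import dataclass, field


@dataclass
class SubProblem:
    """A verification sub-problem: each split ReLU is fixed to a phase
    (True = active, input >= 0; False = inactive, input <= 0)."""
    fixed_phases: dict = field(default_factory=dict)  # neuron id -> phase
    unstable: tuple = ()                              # ids still unsplit


def q_score(features):
    """Stand-in for a trained DQfD Q-network: maps split features to a
    scalar estimating how much that split will shrink the search."""
    return sum(features)  # placeholder; a real scorer is a neural network


def bound_subproblem(sub):
    """Stand-in for the verifier's bounding step. Returns 'safe',
    'unsafe', or 'unknown' (needs further splitting)."""
    return "unknown" if sub.unstable else "safe"  # toy behavior


def verify(root, max_iters=1000):
    """Branch-and-bound: repeatedly pick the unstable ReLU the learned
    heuristic ranks highest, and branch on its two phases."""
    queue = [root]
    for _ in range(max_iters):
        if not queue:
            return "safe"          # every sub-problem was discharged
        sub = queue.pop()
        status = bound_subproblem(sub)
        if status == "unsafe":
            return "unsafe"        # counterexample found
        if status == "safe":
            continue               # this sub-problem is discharged
        # RL-guided choice: score each candidate neuron, split the best.
        best = max(sub.unstable, key=lambda n: q_score([n]))
        rest = tuple(n for n in sub.unstable if n != best)
        for phase in (True, False):
            queue.append(SubProblem({**sub.fixed_phases, best: phase}, rest))
    return "unknown"               # iteration budget exhausted


if __name__ == "__main__":
    print(verify(SubProblem(unstable=(0, 1, 2))))
```

The split order matters because a good choice lets the bounding step discharge sub-problems early, while a poor choice forces the loop toward the exponential worst case; the learned q_score is what the paper's heuristic would replace a hand-crafted rule with.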
Similar Papers
BaB-prob: Branch and Bound with Preactivation Splitting for Probabilistic Verification of Neural Networks
Machine Learning (CS)
Checks if AI makes the right choices.
From Solving to Verifying: A Unified Objective for Robust Reasoning in LLMs
Machine Learning (CS)
Helps AI check its own thinking better.
Efficient Neural Clause-Selection Reinforcement
Artificial Intelligence
Teaches computers to prove math problems faster.