BaB-prob: Branch and Bound with Preactivation Splitting for Probabilistic Verification of Neural Networks
By: Fangji Wang, Panagiotis Tsiotras
Potential Business Impact:
Checks how likely an AI is to make the right choice.
Branch-and-bound with preactivation splitting has been shown to be highly effective for deterministic verification of neural networks. In this paper, we extend this framework to the probabilistic setting. We propose BaB-prob, which iteratively divides the original problem into subproblems by splitting preactivations and leverages linear bounds computed by linear bound propagation to bound the probability of each subproblem. We prove soundness and completeness of BaB-prob for feedforward ReLU neural networks. Furthermore, we introduce the notion of uncertainty level and design two efficient strategies for preactivation splitting, yielding BaB-prob-ordered and BaB+BaBSR-prob. We evaluate BaB-prob on untrained networks, on MNIST and CIFAR-10 models, and on VNN-COMP 2025 benchmarks. Across these settings, our approach consistently outperforms state-of-the-art approaches on problems with medium- to high-dimensional inputs.
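To make the loop in the abstract concrete, here is a minimal, self-contained Python sketch of branch and bound with preactivation splitting on a toy one-dimensional problem. Every name in it (W, V, mass, unstable, exact_prob, bab_prob) is a hypothetical illustration, not the authors' implementation: the trivial "split until all signs are fixed" scheme stands in for the paper's linear-bound-propagation probability bounds, and "first unstable neuron" stands in for the BaB-prob-ordered and BaB+BaBSR-prob splitting heuristics.

```python
# Toy instance: x ~ Uniform[-1, 1], one hidden ReLU layer, property "y >= 0".
# Splitting neuron i's preactivation w_i*x + b_i at its kink -b_i/w_i fixes
# its sign on both child subproblems (here: sub-intervals of the input).

W = [(1.0, 0.3), (-1.0, 0.5)]   # hidden neurons i: preactivation w_i*x + b_i
V = [1.0, -2.0]                 # output weights v_i
C = 0.1                         # output bias: y = sum_i v_i*relu(w_i*x + b_i) + C

def mass(lo, hi):
    """Probability mass of [lo, hi] under Uniform[-1, 1]."""
    return max(0.0, (min(hi, 1.0) - max(lo, -1.0)) / 2.0)

def unstable(lo, hi):
    """Neurons whose preactivation changes sign strictly inside [lo, hi]."""
    kinks = [(i, -b / w) for i, (w, b) in enumerate(W)]
    return [(i, k) for i, k in kinks if lo < k < hi]

def exact_prob(lo, hi):
    """P(y >= 0 and x in [lo, hi]) when every neuron is stable on [lo, hi].

    With all ReLU signs fixed, y is linear in x: y = A*x + B, so the
    satisfying set is a sub-interval whose mass can be read off exactly.
    """
    mid = (lo + hi) / 2.0           # each neuron's sign is constant here
    act = [i for i, (w, b) in enumerate(W) if w * mid + b > 0]
    A = sum(V[i] * W[i][0] for i in act)
    B = sum(V[i] * W[i][1] for i in act) + C
    if A == 0.0:
        return mass(lo, hi) if B >= 0 else 0.0
    x0 = -B / A                     # y >= 0 on one side of the root x0
    return mass(max(lo, x0), hi) if A > 0 else mass(lo, min(hi, x0))

def bab_prob():
    """Branch and bound with preactivation splitting (1-D toy version)."""
    stack, prob = [(-1.0, 1.0)], 0.0
    while stack:
        lo, hi = stack.pop()
        splits = unstable(lo, hi)
        if not splits:
            prob += exact_prob(lo, hi)   # all signs fixed: exact on this piece
        else:
            _, k = splits[0]             # paper: pick neuron by uncertainty level
            stack += [(lo, k), (k, hi)]
    return prob

print(f"P(y >= 0) = {bab_prob():.4f}")   # 0.4000 for this toy network
```

In one dimension this recursion terminates with the exact probability, since each split removes a kink from both children. In the general setting described in the abstract, a subproblem cannot be resolved exactly, so BaB-prob instead maintains sound lower and upper bounds on each subproblem's probability via linear bound propagation and refines the subproblems until the aggregate bounds are conclusive.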
Similar Papers
Adaptive Branch-and-Bound Tree Exploration for Neural Network Verification
Machine Learning (CS)
Finds mistakes in smart computer brains faster.
Learning to Split: A Reinforcement-Learning-Guided Splitting Heuristic for Neural Network Verification
Logic in Computer Science
Teaches computers to check AI faster.
Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation
Machine Learning (CS)
Makes AI safer by checking its decisions.