Learning to Split: A Reinforcement-Learning-Guided Splitting Heuristic for Neural Network Verification

Published: December 11, 2025 | arXiv ID: 2512.10747v1

By: Maya Swisa, Guy Katz

Potential Business Impact:

Teaches computers to check AI faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

State-of-the-art neural network verifiers operate by encoding neural network verification as constraint satisfaction problems. When dealing with standard piecewise-linear activation functions, such as ReLUs, verifiers typically employ branching heuristics that break a complex constraint satisfaction problem into multiple, simpler problems. The verifier's performance depends heavily on the order in which this branching is performed: a poor selection may give rise to exponentially many sub-problems, hampering scalability. Here, we focus on the setting where multiple verification queries must be solved for the same neural network. The core idea is to use past experience to make good branching decisions, expediting verification. We present a reinforcement-learning-based branching heuristic that achieves this by applying the Deep Q-learning from Demonstrations (DQfD) technique. Our experimental evaluation demonstrates a substantial reduction in average verification time and in the average number of iterations required, compared to modern splitting heuristics. These results highlight the great potential of reinforcement learning in the context of neural network verification.
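To make the branching idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation): each unfixed ReLU constraint y = max(0, x) can be split into two linear cases, an active phase (x >= 0, y = x) and an inactive phase (x <= 0, y = 0), and a splitting heuristic decides which neuron to branch on next. The `pick_neuron` scoring function stands in for the learned policy; all names and data below are illustrative assumptions.

```python
# Illustrative sketch of ReLU case splitting in a branch-based verifier.
# This is NOT the paper's code; names and structures are assumptions.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class SubProblem:
    # phases maps neuron id -> "active" | "inactive"; unlisted neurons
    # are still unfixed and may be branched on later.
    phases: dict = field(default_factory=dict)


def split(problem: SubProblem, neuron: int):
    """Branch on one unfixed ReLU, yielding two simpler sub-problems:
    one where the ReLU is fixed active (y = x, x >= 0) and one where it
    is fixed inactive (y = 0, x <= 0)."""
    return (
        SubProblem({**problem.phases, neuron: "active"}),
        SubProblem({**problem.phases, neuron: "inactive"}),
    )


def pick_neuron(unfixed, scores):
    """A splitting heuristic: choose the unfixed neuron with the highest
    score. In the paper's setting a learned (RL) policy would supply the
    scores; here `scores` is just a stand-in dictionary."""
    return max(unfixed, key=lambda n: scores.get(n, 0.0))


# Branch once from the root problem using illustrative scores.
root = SubProblem()
n = pick_neuron(unfixed=[0, 1, 2], scores={0: 0.2, 1: 0.9, 2: 0.5})
left, right = split(root, n)
print(n, left.phases, right.phases)
```

The point of the abstract's observation is visible here: each split doubles the number of sub-problems, so a heuristic that fixes the "right" neurons early can prune the tree long before the exponential blow-up occurs.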

Country of Origin
🇮🇱 Israel

Page Count
18 pages

Category
Computer Science:
Logic in Computer Science