Bridging Efficiency and Safety: Formal Verification of Neural Networks with Early Exits
By: Yizhak Yisrael Elboher, Avraham Raviv, Amihay Elboher, and more
Ensuring the safety and efficiency of AI systems is a central goal of modern research. Formal verification provides guarantees of neural network robustness, while early exits improve inference efficiency by enabling intermediate predictions. Yet verifying networks with early exits introduces new challenges due to their conditional execution paths. In this work, we define a robustness property tailored to early exit architectures and show how off-the-shelf solvers can be used to assess it. We present a baseline algorithm, enhanced with an early stopping strategy and heuristic optimizations that preserve soundness and completeness. Experiments on multiple benchmarks validate our framework's effectiveness and demonstrate the performance gains of the improved algorithm. Alongside the inference acceleration that early exits naturally provide, we show that they also improve verifiability, allowing more queries to be solved in less time than on standard networks. Together with a robustness analysis, we show how verification time and solved-query counts can help users navigate the inherent trade-off between accuracy and efficiency.
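To make the setting concrete, here is a minimal, hypothetical sketch of the kind of property involved: local robustness for an early-exit network asks that every input in an epsilon-ball around a point not only keeps the same predicted label but also resolves the exit decision consistently. The sketch below uses interval bound propagation, a sound but incomplete bounding method, whereas the paper relies on complete off-the-shelf solvers; the toy network, the margin-based exit rule, and the threshold tau are illustrative assumptions, not the paper's construction.

```python
# Illustrative sketch only (not the paper's algorithm): check an early-exit
# robustness property with interval bound propagation (IBP). The network
# shapes, the margin-based exit rule, and `tau` are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-block ReLU MLP with one intermediate (early) exit and a final exit.
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
E1, c1 = rng.normal(size=(8, 3)), rng.normal(size=3)   # early-exit head
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
E2, c2 = rng.normal(size=(8, 3)), rng.normal(size=3)   # final head

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x @ W + b."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    center = mid @ W + b
    radius = rad @ np.abs(W)
    return center - radius, center + radius

def interval_relu(lo, hi):
    """Propagate the box [lo, hi] through an elementwise ReLU."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

def verify_early_exit_robust(x, eps, label, tau=2.0):
    """Sound-but-incomplete check: for every x' with ||x' - x||_inf <= eps,
    does the network take the same exit AND predict `label` at that exit?
    Returns True only when IBP can prove it; False means 'unknown'."""
    lo, hi = x - eps, x + eps
    lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))

    # Hypothetical exit rule: exit early when the margin of `label` over
    # every other class exceeds tau at the early head.
    elo, ehi = interval_affine(lo, hi, E1, c1)
    worst = elo[label] - np.delete(ehi, label)   # worst-case margins
    best = ehi[label] - np.delete(elo, label)    # best-case margins
    if worst.min() > tau:
        return True   # the early exit always fires, always with `label`
    if best.min() <= tau:
        # The early exit never fires: prove robustness at the final exit.
        lo, hi = interval_relu(*interval_affine(lo, hi, W2, b2))
        flo, fhi = interval_affine(lo, hi, E2, c2)
        return (flo[label] - np.delete(fhi, label)).min() > 0
    return False      # the exit decision itself may flip: cannot conclude

x = rng.normal(size=4)
print(verify_early_exit_robust(x, eps=0.01, label=0))
```

The three-way split mirrors what makes these queries harder than standard robustness: a verifier must reason about which branch executes, not just which label wins, and the unresolved middle case is exactly where a complete solver takes over.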