Just-In-Time Piecewise-Linear Semantics for ReLU-type Networks
By: Hongyi Duan, Haoyang Liu, Jian'an Zhang, and more
Potential Business Impact:
Finds errors in AI models quickly and with formal guarantees.
We present a just-in-time (JIT) piecewise-linear (PL) semantics for ReLU-type networks that compiles models into a guarded continuous piecewise-linear (CPWL) transducer with shared guards. The system adds a guard hyperplane only once its operand is affine on the current cell, maintains global lower/upper envelopes, and drives refinement with a budgeted branch-and-bound. We obtain anytime soundness, exactness on fully refined cells, monotone progress, guard-linear complexity (avoiding the global $\binom{k}{2}$ pairwise cost), dominance pruning, and decidability under finite refinement. The shared carrier supports region extraction, decision complexes, Jacobians, exact/certified Lipschitz constants, LP/SOCP robustness certification, and maximal causal influence. A minimal prototype returns certificates or counterexamples at a cost proportional to the number of visited subdomains.
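To make the refinement loop concrete, here is a minimal Python sketch in the spirit of the abstract: on each cell, ReLU guards whose sign is fixed make the network affine there, so bounds are exact on fully refined cells, and only cells with unstable guards are split further, under a budget. The network weights, the property $f(x) \ge 0$, and the axis-aligned bisection are illustrative assumptions; the paper's prototype works over shared guard hyperplanes rather than box splits.

```python
import numpy as np

# Illustrative 2-2-1 ReLU net (hypothetical weights): f(x) = w2 . relu(W1 x + b1) + b2.
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([-0.2, 0.1])
w2 = np.array([1.0, -1.0])
b2 = 0.3

def pre_bounds(lo, hi):
    """Sound interval bounds on the pre-activations z = W1 x + b1 over the box [lo, hi]."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    c = W1 @ mid + b1
    r = np.abs(W1) @ rad
    return c - r, c + r

def verify(lo, hi, budget=200):
    """Budgeted branch-and-bound for the property f(x) >= 0 on the box [lo, hi].

    Returns ('certified', None), ('counterexample', x), or ('budget', None).
    On cells where every ReLU guard has a fixed sign, f is affine, so the
    bound is exact (mirroring the exactness-on-fully-refined-cells claim).
    """
    stack = [(lo, hi)]
    while stack:
        if budget == 0:
            return 'budget', None            # anytime: sound but incomplete
        budget -= 1
        lo, hi = stack.pop()
        z_lo, z_hi = pre_bounds(lo, hi)
        stable = (z_lo >= 0) | (z_hi <= 0)   # guards with a fixed sign on this cell
        if stable.all():
            # Fully refined cell: f(x) = w_eff . x + b_eff exactly.
            act = (z_lo >= 0).astype(float)
            w_eff = w2 @ (W1 * act[:, None])
            b_eff = w2 @ (b1 * act) + b2
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            f_min = w_eff @ mid - np.abs(w_eff) @ rad + b_eff
            if f_min >= 0:
                continue                     # exact certificate on this cell
            x_star = mid - np.sign(w_eff) * rad  # exact minimizer (a box corner)
            return 'counterexample', x_star
        # Coarse sound lower bound first; prune the cell if it already certifies.
        h_lo, h_hi = np.maximum(z_lo, 0), np.maximum(z_hi, 0)
        f_lo = w2 @ ((h_lo + h_hi) / 2) - np.abs(w2) @ ((h_hi - h_lo) / 2) + b2
        if f_lo >= 0:
            continue
        # Refine: the paper splits along an unstable guard's hyperplane; an
        # axis-aligned bisection of the widest dimension stands in for that here.
        d = int(np.argmax(hi - lo))
        m = (lo[d] + hi[d]) / 2
        lo2, hi1 = lo.copy(), hi.copy()
        lo2[d], hi1[d] = m, m
        stack += [(lo, hi1), (lo2, hi)]
    return 'certified', None

print(verify(np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```

As in the abstract, the cost of a run is proportional to the number of visited subdomains: each loop iteration consumes one unit of budget and either certifies, refutes, or splits exactly one cell.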
Similar Papers
Better Neural Network Expressivity: Subdividing the Simplex
Machine Learning (CS)
Shows neural networks can compute more with fewer layers.
Constraining the outputs of ReLU neural networks
Algebraic Geometry
Maps which outputs a neural network can produce.
Parameterized Hardness of Zonotope Containment and Neural Network Verification
Computational Complexity
Shows checking AI models for mistakes is computationally hard.