Optimization over Trained (and Sparse) Neural Networks: A Surrogate within a Surrogate
By: Hung Pham, Aiden Ren, Ibrahim Tahir, and more
Potential Business Impact:
Speeds up the solution of hard optimization problems that embed trained neural networks, such as verifying network robustness, by replacing the dense network with a pruned surrogate.
We can approximate a constraint or an objective function that is uncertain or nonlinear with a neural network that we embed in the optimization model. This approach, known as constraint learning, faces the challenge that optimization models with neural network surrogates are harder to solve. Such difficulties have motivated studies on model reformulation, specialized optimization algorithms, and, to a lesser extent, pruning of the embedded networks. In this work, we double down on the use of surrogates by applying network pruning to produce a surrogate of the neural network itself. In the context of using a Mixed-Integer Linear Programming (MILP) solver to verify neural networks, sparse surrogates let us compute adversarial perturbations for dense neural networks faster, especially, and surprisingly, when we do not take the time to finetune the sparse network to make up for the loss in accuracy. In other words, we show that a pruned network with poor classification performance can still be a good, and more efficient, surrogate.
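To make the idea concrete, here is a minimal sketch, not the authors' code, of one way the approach could look: take a trained dense ReLU network, apply one-shot magnitude pruning without finetuning, and use the resulting sparse network as the surrogate embedded in the MILP. The layer sizes, the 80% sparsity level, and the choice of magnitude pruning are illustrative assumptions, since the abstract does not specify the pruning method.

```python
# Sketch (assumptions noted above): one-shot magnitude pruning of a dense
# ReLU network, without finetuning, to obtain a sparse MILP surrogate.
import numpy as np

rng = np.random.default_rng(0)

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the smallest-magnitude entries of each weight matrix."""
    pruned = []
    for W in weights:
        k = int(sparsity * W.size)
        threshold = np.partition(np.abs(W).ravel(), k)[k]
        pruned.append(W * (np.abs(W) >= threshold))
    return pruned

# A toy dense network 784 -> 128 -> 64 -> 10 (sizes are assumptions).
dense = [rng.standard_normal(shape) for shape in [(784, 128), (128, 64), (64, 10)]]
sparse = magnitude_prune(dense, sparsity=0.8)

# In a standard big-M MILP encoding of a ReLU neuron, each nonzero weight
# W_ij contributes a linear term to that neuron's constraints, e.g.
#   z_j >= sum_i W_ij * x_i + b_j  and  z_j <= sum_i W_ij * x_i + b_j + M * (1 - d_j),
# with d_j binary. Pruning therefore shrinks the constraint matrix the
# solver has to handle when searching for adversarial perturbations.
for name, net in [("dense", dense), ("sparse", sparse)]:
    nnz = sum(int(np.count_nonzero(W)) for W in net)
    print(f"{name}: {nnz} nonzero weights -> {nnz} linear terms in the MILP encoding")
```

The point of the sketch is only the size reduction: the sparse surrogate may classify poorly, yet, per the abstract, it can still steer the MILP solver to adversarial perturbations for the original dense network more quickly.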
Similar Papers
Interpretable Deep Neural Network for Modeling Functional Surrogates
Methodology
AI learns complex science problems faster.
Low-rank surrogate modeling and stochastic zero-order optimization for training of neural networks with black-box layers
Machine Learning (CS)
Makes AI learn faster using light and math.
Learning based convex approximation for constrained parametric optimization
Optimization and Control
Solves hard math problems with smart computer brains.