Strengthening the Internal Adversarial Robustness in Lifted Neural Networks
By: Christopher Zach
Potential Business Impact:
Makes computer brains stronger against mistakes.
Lifted neural networks (i.e. neural architectures that explicitly optimize over their network potentials to determine the neural activities) can be combined with a type of adversarial training to gain robustness for internal as well as input layers, in addition to improved generalization performance. In this work we first investigate how adversarial robustness in this framework can be further strengthened by solely modifying the training loss. In a second step we address some remaining limitations and arrive at a novel training loss for lifted neural networks that combines targeted and untargeted adversarial perturbations.
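The two ingredients named in the abstract can be illustrated with a toy sketch: activities are determined by explicitly minimizing a layer-wise potential (here a simple quadratic one), and robustness is probed by an untargeted signed-gradient perturbation applied to the internal activities. All dimensions, the quadratic penalties, and the single-step perturbation are illustrative assumptions, not the paper's actual losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and weights (hypothetical, not from the paper)
d_in, d_hid, d_out = 4, 8, 3
W1 = rng.normal(size=(d_hid, d_in)) * 0.5
W2 = rng.normal(size=(d_out, d_hid)) * 0.5

x = rng.normal(size=d_in)
y = np.zeros(d_out)
y[1] = 1.0  # one-hot target

def out_loss(z):
    """Quadratic output loss, a stand-in for the training loss."""
    return 0.5 * np.sum((W2 @ z - y) ** 2)

def solve_activations(x, y, steps=200, lr=0.1):
    """Determine the activities by explicitly minimizing a lifted
    potential over z: a layer-wise reconstruction penalty plus the
    output loss (the defining trait of lifted networks)."""
    target = np.maximum(W1 @ x, 0.0)  # feed-forward initialization
    z = target.copy()
    for _ in range(steps):
        # gradient of the potential 0.5*||z - relu(W1 x)||^2 + out_loss(z)
        g = (z - target) + W2.T @ (W2 @ z - y)
        z -= lr * g
    return z

def internal_perturbation(z, eps=0.1):
    """Untargeted adversarial perturbation of the *internal* layer:
    one signed gradient-ascent step on the output loss."""
    g = W2.T @ (W2 @ z - y)
    return z + eps * np.sign(g)

z_star = solve_activations(x, y)
z_adv = internal_perturbation(z_star)
# Since the output loss is convex in z, the ascent step cannot decrease it
assert out_loss(z_adv) >= out_loss(z_star)
```

The perturbation here attacks the hidden activities rather than the input, which is the "internal" robustness the abstract refers to; a targeted variant would instead push the activities toward a chosen wrong label.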
Similar Papers
Geometric origin of adversarial vulnerability in deep learning
Machine Learning (CS)
Makes AI smarter and harder to trick.
Algorithms for Adversarially Robust Deep Learning
Machine Learning (CS)
Makes AI safer from tricks and mistakes.
Narrowing Class-Wise Robustness Gaps in Adversarial Training
CV and Pattern Recognition
Makes AI better at guessing, even with tricky data.