Strengthening the Internal Adversarial Robustness in Lifted Neural Networks

Published: March 10, 2025 | arXiv ID: 2503.07818v1

By: Christopher Zach

Potential Business Impact:

Makes neural networks more resistant to adversarial errors, both at their inputs and within their internal layers.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Lifted neural networks (i.e. neural architectures explicitly optimizing over respective network potentials to determine the neural activities) can be combined with a type of adversarial training to gain robustness for internal as well as input layers, in addition to improved generalization performance. In this work we first investigate how adversarial robustness in this framework can be further strengthened solely by modifying the training loss. In a second step we fix some remaining limitations and arrive at a novel training loss for lifted neural networks that combines targeted and untargeted adversarial perturbations.
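To make the idea of "optimizing over network potentials to determine the neural activities" concrete, the following is a minimal, hedged sketch of a generic lifted (energy-based) formulation: activations are free variables found by minimizing a quadratic-penalty potential via block-coordinate descent, rather than by a forward pass. This is not the paper's specific training loss; the weights, penalty weight `rho`, and iteration count are all illustrative assumptions.

```python
# Minimal sketch of a generic lifted two-layer network (illustrative only,
# not the formulation or loss proposed in the paper).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input
y = np.array([1.0])             # target
W1 = rng.normal(size=(4, 3))    # layer weights, held fixed here
W2 = rng.normal(size=(1, 4))
rho = 1.0                       # assumed penalty weight coupling layers

def potential(z1, z2):
    # Lifted objective: couple z1 to ReLU(W1 x), z2 to W2 z1, and add a
    # task loss on the output variable z2.
    return (rho * np.sum((z1 - np.maximum(W1 @ x, 0.0)) ** 2)
            + rho * np.sum((z2 - W2 @ z1) ** 2)
            + np.sum((z2 - y) ** 2))

# Block-coordinate descent over the activations: the "inference" step that
# replaces the usual forward pass in lifted formulations.
z1 = np.maximum(W1 @ x, 0.0)
z2 = W2 @ z1
for _ in range(50):
    # Minimize over z2 with z1 fixed (closed form for the quadratic terms).
    z2 = (rho * (W2 @ z1) + y) / (rho + 1.0)
    # Minimize over z1 with z2 fixed, then project to >= 0 as a crude
    # stand-in for the ReLU constraint.
    A = rho * np.eye(4) + rho * (W2.T @ W2)
    b = rho * np.maximum(W1 @ x, 0.0) + rho * (W2.T @ z2)
    z1 = np.maximum(np.linalg.solve(A, b), 0.0)

print("lifted potential after inference:", potential(z1, z2))
```

Because the activations themselves are decision variables of this potential, adversarial perturbations can be injected and penalized at internal layers as well as at the input, which is the property the paper's modified training losses build on.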

Country of Origin
πŸ‡ΈπŸ‡ͺ Sweden

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)