When Flatness Does (Not) Guarantee Adversarial Robustness
By: Nils Philipp Walter, Linara Adilova, Jilles Vreeken, and more
Potential Business Impact:
Helps AI models resist being fooled by small, deliberately crafted input changes (adversarial examples).
Despite their empirical success, neural networks remain vulnerable to small, adversarial perturbations. A longstanding hypothesis suggests that flat minima, regions of low curvature in the loss landscape, offer increased robustness. While intuitive, this connection has remained largely informal and incomplete. By rigorously formalizing the relationship, we show this intuition is only partially correct: flatness implies local but not global adversarial robustness. To arrive at this result, we first derive a closed-form expression for relative flatness in the penultimate layer, and then show we can use this to constrain the variation of the loss in input space. This allows us to formally analyze the adversarial robustness of the entire network. We then show that to maintain robustness beyond a local neighborhood, the loss needs to curve sharply away from the data manifold. We validate our theoretical predictions empirically across architectures and datasets, uncovering the geometric structure that governs adversarial vulnerability, and linking flatness to model confidence: adversarial examples often lie in large, flat regions where the model is confidently wrong. Our results challenge simplified views of flatness and provide a nuanced understanding of its role in robustness.
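To make the abstract's core claim concrete, here is a minimal, illustrative sketch (not the authors' code) of how one might probe flatness in input space: it compares how much the loss varies in a small neighborhood around a clean input versus around a one-step (FGSM-style) adversarial example, and reports the model's confidence at the adversarial point. The model, data, radii, and epsilon below are hypothetical placeholders, and the random-perturbation spread is only a crude stand-in for the paper's formal relative-flatness analysis.

```python
# Illustrative sketch only: a crude input-space flatness probe.
# The model, input, label, radius, and epsilon are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in for a trained classifier on flattened 28x28 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

x = torch.rand(1, 1, 28, 28)   # placeholder "clean" input
y = torch.tensor([3])          # placeholder label

def loss_at(inp):
    return F.cross_entropy(model(inp), y)

def local_loss_variation(center, radius=0.05, n_samples=64):
    """Crude flatness proxy: maximum deviation of the loss over random
    perturbations of `center` inside an L-infinity ball of the given radius."""
    with torch.no_grad():
        base = loss_at(center)
        losses = []
        for _ in range(n_samples):
            delta = (torch.rand_like(center) * 2 - 1) * radius
            losses.append(loss_at(center + delta))
        return (torch.stack(losses) - base).abs().max().item()

# One-step FGSM-style adversarial example (epsilon = 0.1 is an arbitrary choice).
x_adv = x.clone().requires_grad_(True)
loss_at(x_adv).backward()
with torch.no_grad():
    x_adv = x + 0.1 * x_adv.grad.sign()

print("loss variation near clean input :", local_loss_variation(x))
print("loss variation near adversarial :", local_loss_variation(x_adv))
print("confidence at adversarial point :",
      F.softmax(model(x_adv), dim=1).max().item())
```

In the paper's terms, a small loss variation around the adversarial point together with high confidence would correspond to the reported failure mode: the adversarial example sits in a large, flat region where the model is confidently wrong, so flatness alone does not guarantee global robustness.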
Similar Papers
Understanding Flatness in Generative Models: Its Role and Benefits
CV and Pattern Recognition
Makes AI art more stable and less buggy.
Does Flatness imply Generalization for Logistic Loss in Univariate Two-Layer ReLU Network?
Machine Learning (CS)
Makes computer learning more reliable for some tasks.
A Function Centric Perspective On Flat and Sharp Minima
Machine Learning (CS)
Sharpness can make AI smarter and safer.