Probabilistic Robustness for Free? Revisiting Training via a Benchmark
By: Yi Zhang, Zheng Wang, Chen Zhen, and more
Potential Business Impact:
Tests computer smarts against tricky mistakes.
Deep learning models are notoriously vulnerable to imperceptible perturbations. Most existing research centers on adversarial robustness (AR), which evaluates models under worst-case scenarios by examining the existence of deterministic adversarial examples (AEs). In contrast, probabilistic robustness (PR) adopts a statistical perspective, measuring the probability that predictions remain correct under stochastic perturbations. While PR is widely regarded as a practical complement to AR, dedicated training methods for improving PR are still relatively underexplored, albeit with emerging progress. Among the few PR-targeted training methods, we identify three limitations: (i) non-comparable evaluation protocols; (ii) limited comparisons to strong AT baselines despite anecdotal PR gains from AT; and (iii) no unified framework to compare the generalization of these methods. Thus, we introduce PRBench, the first benchmark dedicated to evaluating improvements in PR achieved by different robustness training methods. PRBench empirically compares most common AT and PR-targeted training methods using a comprehensive set of metrics, including clean accuracy, PR and AR performance, training efficiency, and generalization error (GE). We also provide theoretical analysis on the GE of PR performance across different training methods. Main findings revealed by PRBench include: AT methods are more versatile than PR-targeted training methods in terms of improving both AR and PR performance across diverse hyperparameter settings, while PR-targeted training methods consistently yield lower GE and higher clean accuracy. A leaderboard comprising 222 trained models across 7 datasets and 10 model architectures is publicly available at https://tmpspace.github.io/PRBenchLeaderboard/.
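The abstract defines PR as the probability that a prediction stays correct under stochastic perturbations, which in practice is typically estimated by Monte Carlo sampling. The sketch below illustrates that idea under stated assumptions; the function name `estimate_pr`, the toy classifier, and the uniform L-infinity perturbation model are illustrative choices, not the paper's actual protocol.

```python
import numpy as np

def estimate_pr(predict, x, label, epsilon=0.1, n_samples=1000, seed=0):
    """Monte Carlo estimate of probabilistic robustness (PR):
    the fraction of perturbations, sampled uniformly from an
    L-inf ball of radius epsilon, under which the model's
    prediction still matches the true label.

    This is an illustrative sketch, not the benchmark's estimator.
    """
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_samples):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        # Keep the perturbed input in the valid data range [0, 1].
        x_perturbed = np.clip(x + delta, 0.0, 1.0)
        correct += int(predict(x_perturbed) == label)
    return correct / n_samples

# Toy stand-in classifier: predicts class 1 when feature sum is positive.
predict = lambda v: int(v.sum() > 0)
x = np.array([0.6, 0.55])
pr = estimate_pr(predict, x, label=1, epsilon=0.1)
```

In contrast, AR for the same input would ask whether *any* perturbation in the ball flips the prediction; PR instead reports the measure of perturbations that do not.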
Similar Papers
Non-Parametric Probabilistic Robustness: A Conservative Metric with Optimized Perturbation Distributions
CV and Pattern Recognition
Makes AI more trustworthy with unknown errors.
Defense That Attacks: How Robust Models Become Better Attackers
CV and Pattern Recognition
Makes AI easier to trick with fake images.