From Snow to Rain: Evaluating Robustness, Calibration, and Complexity of Model-Based Robust Training
By: Josué Martínez-Martínez, Olivia Brown, Giselle Zeno, and more
Robustness to natural corruptions remains a critical challenge for reliable deep learning, particularly in safety-sensitive domains. We study a family of model-based training approaches that leverage a learned nuisance variation model to generate realistic corruptions, as well as new hybrid strategies that combine random coverage with adversarial refinement in nuisance space. Using the Snow and Rain corruptions from the Challenging Unreal and Real Environments for Traffic Sign Recognition (CURE-TSR) dataset, we evaluate accuracy, calibration, and training complexity across corruption severities. Our results show that model-based methods consistently outperform the Vanilla, Adversarial Training, and AugMix baselines: model-based adversarial training provides the strongest robustness across all corruptions, but at the expense of higher computation, while model-based data augmentation achieves comparable robustness at $T\times$ less computational complexity, without a statistically significant drop in performance. These findings highlight the importance of learned nuisance models for capturing natural variability, and suggest a promising path toward more resilient and calibrated models under challenging conditions.
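To make the hybrid strategy concrete, below is a minimal PyTorch sketch of nuisance-space adversarial training under stated assumptions: it presumes a differentiable learned corruption model that maps a clean image and a nuisance parameter vector to a corrupted image. The names `nuisance_model`, `theta_dim`, `steps`, and `step_size` are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def model_based_adversarial_batch(classifier, nuisance_model, x, y,
                                  theta_dim=8, steps=5, step_size=0.1):
    """Hybrid nuisance-space attack: random coverage + adversarial refinement.

    `nuisance_model(x, theta)` is a hypothetical differentiable corruption
    model (e.g., a learned Snow/Rain renderer) mapping clean images and
    nuisance parameters `theta` to corrupted images.
    """
    # Random coverage: sample initial nuisance parameters per example.
    theta = torch.rand(x.size(0), theta_dim, device=x.device, requires_grad=True)

    # Adversarial refinement: ascend the classification loss w.r.t. theta,
    # staying inside the nuisance model's valid parameter range.
    for _ in range(steps):
        loss = F.cross_entropy(classifier(nuisance_model(x, theta)), y)
        grad, = torch.autograd.grad(loss, theta)
        with torch.no_grad():
            theta += step_size * grad.sign()
            theta.clamp_(0.0, 1.0)
    return nuisance_model(x, theta.detach())

# Training step (sketch): train the classifier on the corrupted batch.
# x_adv = model_based_adversarial_batch(classifier, nuisance_model, x, y)
# loss = F.cross_entropy(classifier(x_adv), y); loss.backward()
```

Setting `steps=0` recovers a pure model-based data-augmentation variant that only samples random nuisance parameters; since each refinement step costs an extra forward/backward pass, skipping the inner loop is where a roughly $T\times$ reduction in training cost, as in the abstract's complexity claim, would come from.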