The Easy Path to Robustness: Coreset Selection using Sample Hardness
By: Pranav Ramesh, Arjun Roy, Deepak Ravikumar, and more
Potential Business Impact:
Makes AI models harder to fool with adversarial tricks, while training on less data.
Designing adversarially robust models from a data-centric perspective requires understanding which input samples are most crucial for learning resilient features. While coreset selection provides a mechanism for efficient training on data subsets, current algorithms are designed for clean accuracy and fall short in preserving robustness. To address this, we propose a framework linking a sample's adversarial vulnerability to its "hardness," which we quantify using the average input gradient norm (AIGN) over training. We demonstrate that "easy" samples (with low AIGN) are less vulnerable and occupy regions further from the decision boundary. Leveraging this insight, we present EasyCore, a coreset selection algorithm that retains only the samples with low AIGN for training. We empirically show that models trained on EasyCore-selected data achieve significantly higher adversarial accuracy than those trained with competing coreset methods under both standard and adversarial training. As AIGN is a model-agnostic dataset property, EasyCore is an efficient and widely applicable data-centric method for improving adversarial robustness. We show that EasyCore achieves up to 7% and 5% improvement in adversarial accuracy under standard training and TRADES adversarial training, respectively, compared to existing coreset methods.
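The abstract describes the mechanism concretely enough to sketch: score every training sample by its average input gradient norm (AIGN) across training iterations, then keep only the lowest-scoring ("easy") samples as the coreset. Below is a minimal PyTorch sketch of that idea; the paper does not provide this code, so the function names (`accumulate_aign`, `select_easycore`), the choice of L2 norm, and the `keep_fraction` parameter are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def accumulate_aign(model, x, y, indices, aign_sum, counts):
    """Accumulate per-sample input-gradient norms for one batch.

    AIGN for a sample is its input-gradient norm averaged over training
    iterations; the L2 norm here is an assumption about the exact definition.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)        # d(loss)/d(input)
    norms = grad.flatten(1).norm(dim=1).detach()  # one L2 norm per sample
    aign_sum[indices] += norms
    counts[indices] += 1

def select_easycore(aign_sum, counts, keep_fraction=0.5):
    """Return indices of the lowest-AIGN ("easy") samples."""
    aign = aign_sum / counts.clamp(min=1)
    k = int(keep_fraction * aign.numel())
    return torch.argsort(aign)[:k]                # ascending: easiest first

# Usage sketch (the index-yielding loader is an assumption):
#   aign_sum = torch.zeros(len(dataset)); counts = torch.zeros(len(dataset))
#   for x, y, idx in loader:          # loader yields dataset indices too
#       accumulate_aign(model, x, y, idx, aign_sum, counts)
#       ...usual forward/backward/optimizer step...
#   easy_idx = select_easycore(aign_sum, counts, keep_fraction=0.5)
#   coreset = torch.utils.data.Subset(dataset, easy_idx.tolist())
```

After selection, training (standard or TRADES adversarial training) would proceed on the coreset only; since the abstract describes AIGN as a model-agnostic dataset property, the scores could in principle be computed once and reused across architectures.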
Similar Papers
Stable Coresets via Posterior Sampling: Aligning Induced and Full Loss Landscapes
Machine Learning (CS)
Trains computers faster and better with less data.
Deterministic Coreset Construction via Adaptive Sensitivity Trimming
Machine Learning (Stat)
Makes computer learning faster and more accurate.
Coresets for Clustering Under Stochastic Noise
Machine Learning (CS)
Cleans messy data for better computer learning.