On Improving Deep Active Learning with Formal Verification
By: Jonathan Spiegelman, Guy Amir, Guy Katz
Deep Active Learning (DAL) aims to reduce labeling costs in neural-network training by prioritizing the most informative unlabeled samples for annotation. Beyond selecting which samples to label, several DAL approaches further enhance data efficiency by augmenting the training set with synthetic inputs that do not require additional manual labeling. In this work, we investigate how augmenting the training data with adversarial inputs that violate robustness constraints can improve DAL performance. We show that adversarial examples generated via formal verification contribute substantially more than those produced by standard, gradient-based attacks. We apply this extension to multiple modern DAL techniques, as well as to a new technique that we propose, and show that it yields significant improvements in model generalization across standard benchmarks.
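To make the idea concrete, below is a minimal sketch of one active-learning round with adversarial augmentation, assuming a PyTorch setup with an entropy-based acquisition function. The names (`active_learning_round`, `oracle`, `budget`) are illustrative, and a standard FGSM attack stands in for the counterexample search; the paper's contribution is to obtain these adversarial inputs from a formal verifier rather than from such gradient-based heuristics.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, x, y, eps=0.1):
    """Perturb x with the fast gradient sign method. This is a heuristic
    stand-in for the verification-derived counterexamples the paper uses."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def entropy_query(model, pool_x, budget):
    """Select the `budget` unlabeled samples with the highest predictive entropy."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(pool_x), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(budget).indices

def active_learning_round(model, optimizer, labeled_x, labeled_y,
                          pool_x, oracle, budget=32, eps=0.1):
    # 1. Query: label the most informative samples from the unlabeled pool.
    idx = entropy_query(model, pool_x, budget)
    new_x = pool_x[idx]
    new_y = oracle(new_x)  # manual annotation step

    # 2. Augment: adversarial counterparts reuse the labels just obtained,
    #    so they add training signal without extra labeling cost.
    adv_x = fgsm_adversarial(model, new_x, new_y, eps)

    train_x = torch.cat([labeled_x, new_x, adv_x])
    train_y = torch.cat([labeled_y, new_y, new_y])

    # 3. Retrain on the enlarged labeled set.
    model.train()
    for _ in range(5):
        optimizer.zero_grad()
        F.cross_entropy(model(train_x), train_y).backward()
        optimizer.step()
    return train_x, train_y
```

In the paper's setting, step 2 would instead query a verifier for inputs that provably violate a robustness constraint around the newly labeled samples; the reported finding is that such verification-derived examples improve generalization substantially more than the gradient-based ones sketched here.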