Robust Dataset Distillation by Matching Adversarial Trajectories
By: Wei Lai, Tianyu Ding, Dongdong Ren, and more
Potential Business Impact:
Makes AI models safer from tricky attacks.
Dataset distillation synthesizes compact datasets that enable models to achieve performance comparable to training on the original large-scale datasets. However, existing distillation methods overlook model robustness, so models trained on the distilled data are vulnerable to adversarial attacks. To address this limitation, we introduce the task of "robust dataset distillation", a novel paradigm that embeds adversarial robustness into the synthetic datasets during the distillation process. We propose Matching Adversarial Trajectories (MAT), a method that integrates adversarial training into trajectory-based dataset distillation. MAT incorporates adversarial samples during trajectory generation to obtain robust training trajectories, which are then used to guide the distillation process. As experimentally demonstrated, even with standard (natural) training on our distilled dataset, models achieve enhanced adversarial robustness while maintaining competitive accuracy compared to existing distillation methods. Our work highlights robust dataset distillation as a new and important research direction and provides a strong baseline for future research to bridge the gap between efficient training and adversarial robustness.
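The two-stage idea described above can be sketched on a toy problem. This is an illustrative NumPy mock-up, not the authors' implementation: the linear model, FGSM-style one-step attack, step counts, and finite-difference outer optimization are all simplifying assumptions. Stage 1 records an expert parameter trajectory under adversarial training; stage 2 optimizes a tiny synthetic set so that plain (natural) training on it reproduces a segment of that robust trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_w(w, X, y):
    """Gradient of the logistic loss w.r.t. the weights (labels y in {-1, +1})."""
    s = -y / (1.0 + np.exp((X @ w) * y))
    return X.T @ s / len(y)

def grad_x(w, X, y):
    """Gradient of the logistic loss w.r.t. the inputs."""
    s = -y / (1.0 + np.exp((X @ w) * y))
    return np.outer(s, w) / len(y)

def fgsm(w, X, y, eps):
    # One-step sign-gradient perturbation (FGSM-style); a stand-in for the
    # adversarial samples used while generating the expert trajectory.
    return X + eps * np.sign(grad_x(w, X, y))

# Toy data: two Gaussian blobs in 5 dimensions.
X = np.vstack([rng.normal(+1.0, 1.0, (50, 5)), rng.normal(-1.0, 1.0, (50, 5))])
y = np.concatenate([np.ones(50), -np.ones(50)])

# Stage 1: adversarial training of the expert; record its robust trajectory.
w = np.zeros(5)
trajectory = [w.copy()]
for _ in range(20):
    w = w - 0.5 * grad_w(w, fgsm(w, X, y, eps=0.1), y)
    trajectory.append(w.copy())

# Stage 2: distill a tiny synthetic set so that *natural* training on it
# reproduces a segment of the robust trajectory (trajectory matching).
X_syn = rng.normal(0.0, 1.0, (4, 5))
y_syn = np.array([1.0, 1.0, -1.0, -1.0])
start, end = trajectory[0], trajectory[10]

def match_loss(Xs):
    ws = start.copy()
    for _ in range(10):                    # natural inner steps, no attack
        ws = ws - 0.5 * grad_w(ws, Xs, y_syn)
    return np.sum((ws - end) ** 2)         # distance to the robust target

def fd_grad(Xs, h=1e-4):
    # Finite-difference gradient for brevity; trajectory-matching methods
    # backpropagate through the unrolled inner training steps instead.
    g = np.zeros_like(Xs)
    for i in range(Xs.shape[0]):
        for j in range(Xs.shape[1]):
            d = np.zeros_like(Xs)
            d[i, j] = h
            g[i, j] = (match_loss(Xs + d) - match_loss(Xs - d)) / (2 * h)
    return g

before = match_loss(X_syn)
for _ in range(40):
    g = fd_grad(X_syn)
    step, best = 1.0, match_loss(X_syn)
    while step > 1e-6:                     # backtracking: accept only improvements
        cand = X_syn - step * g
        if match_loss(cand) < best:
            X_syn = cand
            break
        step *= 0.5
after = match_loss(X_syn)
print(before, "->", after)                 # matching loss decreases
```

Because the target trajectory was produced under adversarial training, the synthetic set steers natural training toward robust parameters, which is the core intuition behind MAT.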
Similar Papers
TGDD: Trajectory Guided Dataset Distillation with Balanced Distribution
CV and Pattern Recognition
Makes computer learning faster and better.
Long-tailed Adversarial Training with Self-Distillation
CV and Pattern Recognition
Helps AI learn better from rare examples.
Towards Class-wise Fair Adversarial Training via Anti-Bias Soft Label Distillation
CV and Pattern Recognition
Makes AI fair by defending every class equally.