Robust Dataset Distillation by Matching Adversarial Trajectories

Published: March 15, 2025 | arXiv ID: 2503.12069v1

By: Wei Lai, Tianyu Ding, Dongdong Ren, and more

Potential Business Impact:

Makes AI models trained on compressed (distilled) datasets more resistant to adversarial attacks.

Business Areas:
A/B Testing, Data and Analytics

Dataset distillation synthesizes compact datasets that enable models to achieve performance comparable to training on the original large-scale datasets. However, existing distillation methods overlook the robustness of the model, resulting in models that are vulnerable to adversarial attacks when trained on distilled data. To address this limitation, we introduce the task of "robust dataset distillation," a novel paradigm that embeds adversarial robustness into the synthetic dataset during the distillation process. We propose Matching Adversarial Trajectories (MAT), a method that integrates adversarial training into trajectory-based dataset distillation. MAT incorporates adversarial samples during trajectory generation to obtain robust training trajectories, which are then used to guide the distillation process. Our experiments demonstrate that even with standard (natural) training on our distilled dataset, models achieve enhanced adversarial robustness while maintaining accuracy competitive with existing distillation methods. Our work highlights robust dataset distillation as a new and important research direction and provides a strong baseline for future work to bridge the gap between efficient training and adversarial robustness.
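
The abstract outlines a two-stage pipeline: first adversarially train an expert network and record its parameter checkpoints (the "adversarial trajectory"), then optimize the synthetic dataset so that a student trained on it follows that trajectory. The PyTorch sketch below illustrates both stages under stated assumptions: the PGD hyperparameters (eps = 8/255, alpha = 2/255, 7 steps), the SGD settings, and the function names (pgd_attack, record_robust_trajectory, trajectory_matching_loss) are illustrative placeholders, and the matching loss follows the standard trajectory-matching (MTT) formulation rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Projected gradient descent within an L-inf ball of radius eps.

    Illustrative settings; the paper may use different attack parameters.
    """
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend the loss, then project back into the eps-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()


def record_robust_trajectory(model, loader, epochs=50, lr=0.01):
    """Stage 1: adversarially train an expert, snapshotting its parameters.

    The saved checkpoints form the robust trajectory that the distillation
    stage later matches.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    trajectory = [[p.detach().clone() for p in model.parameters()]]
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)  # adversarial samples shape the path
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
        trajectory.append([p.detach().clone() for p in model.parameters()])
    return trajectory


def trajectory_matching_loss(student_params, expert_start, expert_end):
    """Stage 2 (MTT-style objective): after training a student for a few steps
    on the synthetic data starting from expert_start, penalize its distance to
    expert_end, normalized by the length of the expert segment.
    """
    num = sum(((s - e) ** 2).sum() for s, e in zip(student_params, expert_end))
    den = sum(((s0 - e) ** 2).sum() for s0, e in zip(expert_start, expert_end))
    return num / den
```

In the full trajectory-matching setup, the student is initialized at a checkpoint sampled from the robust trajectory, trained for a few steps on the synthetic images, and the matching loss is backpropagated through those unrolled steps into the synthetic pixels themselves; the sketch above shows only the loss evaluated at the end of that computation.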

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition