When Dynamic Data Selection Meets Data Augmentation
By: Suorong Yang, Peng Ye, Furao Shen, and more
Potential Business Impact:
Trains computers faster with less data.
Dynamic data selection aims to accelerate training with lossless performance. However, reducing training data inherently limits data diversity, potentially hindering generalization. While data augmentation is widely used to enhance diversity, it is typically not optimized in conjunction with selection. As a result, directly combining these techniques fails to fully exploit their synergies. To tackle this challenge, we propose a novel online data training framework that, for the first time, unifies dynamic data selection and augmentation, achieving both training efficiency and enhanced performance. Our method estimates each sample's joint distribution of local density and multimodal semantic consistency, allowing for the targeted selection of augmentation-suitable samples while suppressing the inclusion of noisy or ambiguous data. This enables a more significant reduction in dataset size without sacrificing model generalization. Experimental results demonstrate that our method outperforms existing state-of-the-art approaches on various benchmark datasets and architectures, e.g., reducing training costs on ImageNet-1k by 50% with lossless performance. Furthermore, our approach enhances noise resistance and improves model robustness, reinforcing its practical utility in real-world scenarios.
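The abstract does not spell out the scoring rule, but the core idea is to score each sample by two signals, local density in feature space and multimodal semantic consistency, then keep a subset and flag which kept samples are worth augmenting. Below is a minimal NumPy sketch of one plausible instantiation under stated assumptions: the k-NN density estimate, the CLIP-style image-text cosine consistency, and every function name, weight, and threshold are illustrative choices, not the paper's actual method.

```python
import numpy as np

def local_density(features: np.ndarray, k: int = 10) -> np.ndarray:
    """Per-sample local density: inverse of the mean distance to the
    k nearest neighbors in feature space (assumed proxy)."""
    # Pairwise Euclidean distances; fine for a small illustrative batch.
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Skip the zero self-distance by taking neighbors 1..k.
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    return 1.0 / (knn.mean(axis=1) + 1e-8)

def semantic_consistency(img_emb: np.ndarray, txt_emb: np.ndarray) -> np.ndarray:
    """Cosine similarity between paired image and text embeddings,
    standing in for 'multimodal semantic consistency'."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return (img * txt).sum(axis=1)

def select_and_flag(features, img_emb, txt_emb,
                    keep_ratio=0.5, consistency_floor=0.2):
    """Return indices kept for this epoch and a boolean mask marking
    which kept samples are judged augmentation-suitable."""
    density = local_density(features)
    consistency = semantic_consistency(img_emb, txt_emb)
    # Joint score: favor sparse-region (low-density) samples for added
    # diversity, gated by consistency to suppress noisy/ambiguous data.
    score = consistency - 0.5 * (density - density.mean()) / (density.std() + 1e-8)
    score[consistency < consistency_floor] = -np.inf  # drop likely-noisy pairs
    n_keep = int(len(score) * keep_ratio)
    kept = np.argsort(score)[::-1][:n_keep]
    # Flag low-density kept samples for augmentation: augmenting them
    # adds diversity where the data manifold is thin.
    augment_mask = density[kept] < np.median(density)
    return kept, augment_mask
```

One design point this sketch tries to capture: filtering on consistency before ranking on density means the low-density preference cannot promote mislabeled outliers, which is how the abstract's "suppressing noisy or ambiguous data" and "targeted selection of augmentation-suitable samples" can coexist in one score.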
Similar Papers
Autoguided Online Data Curation for Diffusion Model Training
CV and Pattern Recognition
Makes AI create better pictures faster.
Data-Agnostic Augmentations for Unknown Variations: Out-of-Distribution Generalisation in MRI Segmentation
CV and Pattern Recognition
Makes medical scans more accurate for doctors.
Conditional Data Synthesis Augmentation
Methodology
Makes computer learning fair for everyone.