TGDD: Trajectory Guided Dataset Distillation with Balanced Distribution
By: Fengli Ran, Xiao Pu, Bo Liu, and more
Potential Business Impact:
Shrinks big training datasets into small synthetic ones, cutting storage and training costs without losing accuracy.
Dataset distillation compresses large datasets into compact synthetic ones to reduce storage and computational costs. Among various approaches, distribution matching (DM)-based methods have attracted attention for their high efficiency. However, they often overlook the evolution of feature representations during training, which limits the expressiveness of synthetic data and weakens downstream performance. To address this issue, we propose Trajectory Guided Dataset Distillation (TGDD), which reformulates distribution matching as a dynamic alignment process along the model's training trajectory. At each training stage, TGDD captures evolving semantics by aligning the feature distributions of the synthetic and original datasets. Meanwhile, it introduces a distribution-constraint regularization to reduce class overlap. This design helps synthetic data preserve both semantic diversity and representativeness, improving performance on downstream tasks. Without additional optimization overhead, TGDD achieves a favorable balance between performance and efficiency. Experiments on ten datasets demonstrate that TGDD achieves state-of-the-art performance, notably a 5.0% accuracy gain on high-resolution benchmarks.
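The abstract describes the method only at a high level, so the following is a minimal Python/PyTorch sketch of the general idea of trajectory-guided distribution matching, not the paper's actual implementation. It assumes the training trajectory is represented by a list of saved encoder checkpoints, and it uses a simple per-class mean-feature matching loss plus a hypothetical cosine-similarity penalty as a stand-in for the paper's distribution-constraint regularization; all function and variable names (tgdd_step, class_separation_penalty, real_batches, etc.) are illustrative assumptions.

```python
# Hypothetical sketch of trajectory-guided distribution matching.
# Not the authors' code: losses, schedules, and the regularizer are assumptions.
import torch
import torch.nn.functional as F


def distribution_matching_loss(feat_real, feat_syn):
    """Classic DM-style objective: match per-class mean features."""
    return F.mse_loss(feat_syn.mean(dim=0), feat_real.mean(dim=0))


def class_separation_penalty(class_means):
    """Hypothetical regularizer to reduce class overlap:
    penalize positive cosine similarity between class-mean features."""
    normed = F.normalize(class_means, dim=1)
    sim = normed @ normed.t()
    off_diag = sim - torch.diag(torch.diag(sim))  # zero out self-similarity
    return off_diag.clamp(min=0).mean()


def tgdd_step(encoders, real_batches, syn_images, syn_labels, lam=0.1):
    """One loss evaluation for the synthetic images.

    encoders:     feature extractors saved at several stages of a training
                  trajectory (assumption: the trajectory is a checkpoint list).
    real_batches: dict mapping class id -> batch of real images of that class.
    syn_images:   learnable synthetic images; syn_labels: their class ids.
    """
    loss = 0.0
    for enc in encoders:  # align distributions at each trajectory stage
        class_means = []
        for c, real_x in real_batches.items():
            f_real = enc(real_x).detach()            # real features (fixed)
            f_syn = enc(syn_images[syn_labels == c])  # synthetic features
            loss = loss + distribution_matching_loss(f_real, f_syn)
            class_means.append(f_syn.mean(dim=0))
        loss = loss + lam * class_separation_penalty(torch.stack(class_means))
    return loss
```

In practice the returned loss would be minimized with respect to syn_images by an outer optimizer over many iterations; since each step reuses fixed checkpoints rather than re-training models, this kind of matching adds little optimization overhead, consistent with the efficiency claim in the abstract.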
Similar Papers
GeoDM: Geometry-aware Distribution Matching for Dataset Distillation
CV and Pattern Recognition
Makes small data sets work like big ones.
Dataset Distillation for Pre-Trained Self-Supervised Vision Models
CV and Pattern Recognition
Creates small, smart picture sets for AI.
DDTime: Dataset Distillation with Spectral Alignment and Information Bottleneck for Time-Series Forecasting
Machine Learning (CS)
Makes computer predictions faster with less data.