Not All Instances Are Equally Valuable: Towards Influence-Weighted Dataset Distillation
By: Qiyan Deng, Changqian Zheng, Lianpeng Qiao, and more
Potential Business Impact:
Improves machine learning by selecting higher-quality training data for distillation.
Dataset distillation condenses large datasets into synthetic subsets, achieving performance comparable to training on the full dataset while substantially reducing storage and computation costs. Most existing dataset distillation methods assume that all real instances contribute equally to the process. In practice, real-world datasets contain both informative and redundant or even harmful instances, and directly distilling the full dataset without considering data quality can degrade model performance. In this work, we present Influence-Weighted Distillation (IWD), a principled framework that leverages influence functions to explicitly account for data quality in the distillation process. IWD assigns adaptive weights to each instance based on its estimated impact on the distillation objective, prioritizing beneficial data while downweighting less useful or harmful ones. Owing to its modular design, IWD can be seamlessly integrated into diverse dataset distillation frameworks. Our empirical results suggest that integrating IWD tends to improve the quality of distilled datasets and enhance model performance, with accuracy gains of up to 7.8%.
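The abstract does not include implementation details, but the weighting idea can be illustrated with a short sketch. The snippet below is a hypothetical illustration, not the authors' code: it approximates each real instance's influence by the alignment between its per-instance gradient and the synthetic-batch gradient (a common first-order proxy for influence functions), maps the scores to non-negative weights, and uses those weights inside a gradient-matching distillation loss. The function names (per_instance_influence, influence_weights, weighted_matching_loss) and the choice of a gradient-matching objective are assumptions.

```python
# Hypothetical sketch of influence-weighted dataset distillation (not the paper's code).
# Assumes a gradient-matching style distillation objective; influence is approximated
# by the alignment between each real instance's gradient and the synthetic-batch gradient.
import torch
import torch.nn.functional as F


def per_instance_influence(model, real_x, real_y, syn_x, syn_y):
    """Approximate each real instance's influence on the distillation objective
    as the cosine similarity between its loss gradient and the synthetic-batch
    gradient (a first-order proxy for influence functions)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the loss on the synthetic batch, flattened into one vector.
    syn_loss = F.cross_entropy(model(syn_x), syn_y)
    syn_grad = torch.autograd.grad(syn_loss, params)
    syn_flat = torch.cat([g.flatten() for g in syn_grad]).detach()

    scores = []
    for i in range(real_x.size(0)):
        loss_i = F.cross_entropy(model(real_x[i:i + 1]), real_y[i:i + 1])
        grad_i = torch.autograd.grad(loss_i, params)
        flat_i = torch.cat([g.flatten() for g in grad_i]).detach()
        scores.append(F.cosine_similarity(flat_i, syn_flat, dim=0))
    return torch.stack(scores)


def influence_weights(scores, temperature=1.0):
    """Map influence scores to non-negative instance weights (mean weight 1);
    low- or negative-influence instances are pushed toward zero."""
    return torch.softmax(scores / temperature, dim=0) * scores.numel()


def weighted_matching_loss(model, real_x, real_y, syn_x, syn_y, weights):
    """Gradient-matching loss in which the real gradient is an influence-weighted
    average of per-instance losses rather than a uniform one."""
    params = [p for p in model.parameters() if p.requires_grad]

    real_losses = F.cross_entropy(model(real_x), real_y, reduction="none")
    real_loss = (weights * real_losses).sum() / weights.sum()
    real_grad = torch.autograd.grad(real_loss, params)

    # create_graph=True lets the matching loss backpropagate to the synthetic data.
    syn_loss = F.cross_entropy(model(syn_x), syn_y)
    syn_grad = torch.autograd.grad(syn_loss, params, create_graph=True)

    return sum(((g_r.detach() - g_s) ** 2).sum()
               for g_r, g_s in zip(real_grad, syn_grad))
```

In a typical distillation loop, syn_x would be a learnable tensor (requires_grad=True) updated by backpropagating weighted_matching_loss, and the instance weights could be refreshed periodically as the model and the synthetic set evolve.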
Similar Papers
Efficient Data Selection at Scale via Influence Distillation
Computation and Language
Makes AI learn better and faster.
A Generative Framework for Causal Estimation via Importance-Weighted Diffusion Distillation
Machine Learning (CS)
Helps doctors pick best treatments for each person.
Knowledge Distillation with Adapted Weight
Machine Learning (CS)
Makes big computer brains smaller and smarter.