Not All Instances Are Equally Valuable: Towards Influence-Weighted Dataset Distillation

Published: October 31, 2025 | arXiv ID: 2510.27253v1

By: Qiyan Deng, Changqian Zheng, Lianpeng Qiao, and more

Potential Business Impact:

Improves machine-learning models by identifying and prioritizing high-quality training data, reducing the storage and compute needed to train accurate models.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Dataset distillation condenses large datasets into synthetic subsets, achieving performance comparable to training on the full dataset while substantially reducing storage and computation costs. Most existing dataset distillation methods assume that all real instances contribute equally to the process. In practice, real-world datasets contain both informative and redundant or even harmful instances, and directly distilling the full dataset without considering data quality can degrade model performance. In this work, we present Influence-Weighted Distillation (IWD), a principled framework that leverages influence functions to explicitly account for data quality in the distillation process. IWD assigns adaptive weights to each instance based on its estimated impact on the distillation objective, prioritizing beneficial data while downweighting less useful or harmful ones. Owing to its modular design, IWD can be seamlessly integrated into diverse dataset distillation frameworks. Our empirical results suggest that integrating IWD tends to improve the quality of distilled datasets and enhance model performance, with accuracy gains of up to 7.8%.
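To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of how per-instance weighting could be folded into a gradient-matching style distillation step. It is not the authors' implementation: the paper estimates instance impact with influence functions on the distillation objective, whereas this sketch substitutes a cheap proxy (cosine alignment between each real instance's gradient and the synthetic batch's gradient) purely for illustration. The function names and the clamping/normalization of weights are assumptions.

```python
# Hypothetical sketch of influence-weighted gradient matching (not the paper's code).
# Proxy assumption: an instance's "influence" is approximated by how well its
# gradient aligns with the gradient of the current synthetic set.

import torch
import torch.nn.functional as F


def per_instance_scores(model, x_real, y_real, x_syn, y_syn):
    """Score each real instance by cosine similarity between its gradient and
    the synthetic batch's gradient (a stand-in for a true influence estimate)."""
    params = [p for p in model.parameters() if p.requires_grad]

    syn_loss = F.cross_entropy(model(x_syn), y_syn)
    g_syn = torch.autograd.grad(syn_loss, params)
    g_syn_flat = torch.cat([g.flatten() for g in g_syn]).detach()

    scores = []
    for i in range(x_real.size(0)):
        loss_i = F.cross_entropy(model(x_real[i:i + 1]), y_real[i:i + 1])
        g_i = torch.autograd.grad(loss_i, params)
        g_i_flat = torch.cat([g.flatten() for g in g_i]).detach()
        scores.append(F.cosine_similarity(g_i_flat, g_syn_flat, dim=0))
    return torch.stack(scores)


def weighted_matching_loss(model, x_real, y_real, x_syn, y_syn):
    """Gradient-matching loss where real instances are reweighted by their
    (clamped, normalized) scores, downweighting unhelpful or harmful ones."""
    scores = per_instance_scores(model, x_real, y_real, x_syn, y_syn)
    weights = torch.clamp(scores, min=0.0)
    weights = weights / (weights.sum() + 1e-8)

    params = [p for p in model.parameters() if p.requires_grad]

    # Influence-weighted gradient of the real data.
    per_sample = F.cross_entropy(model(x_real), y_real, reduction="none")
    real_loss = (weights * per_sample).sum()
    g_real = torch.autograd.grad(real_loss, params)

    # Synthetic gradient, kept in the graph so the synthetic images can be updated.
    syn_loss = F.cross_entropy(model(x_syn), y_syn)
    g_syn = torch.autograd.grad(syn_loss, params, create_graph=True)

    return sum(((a.detach() - b) ** 2).sum() for a, b in zip(g_real, g_syn))
```

Because the weighting is applied only to the real-data side of the matching objective, a sketch like this can in principle be dropped into different distillation backbones (gradient, distribution, or trajectory matching) without changing how the synthetic set itself is optimized, which mirrors the modularity the abstract claims for IWD.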

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)