Improving Model Classification by Optimizing the Training Dataset
By: Morad Tukan, Loay Mualem, Eitan Netzer, and more
Potential Business Impact:
Makes AI learn better and faster from less data.
In the era of data-centric AI, the ability to curate high-quality training data is as crucial as model design. Coresets offer a principled approach to data reduction, enabling efficient learning on large datasets through importance sampling. However, conventional sensitivity-based coreset construction focuses on loss approximation and often falls short in optimizing for classification performance metrics such as the $F1$ score. In this work, we present a systematic framework for tuning the coreset generation process to enhance downstream classification quality. Our method introduces new tunable parameters beyond traditional sensitivity scores, including deterministic sampling, class-wise allocation, and refinement via active sampling. Through extensive experiments on diverse datasets and classifiers, we demonstrate that tuned coresets can significantly outperform both vanilla coresets and full-dataset training on key classification metrics, offering an effective path toward better and more efficient model training.
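The abstract does not include code, but the minimal sketch below illustrates the general idea of sensitivity-based importance sampling with two of the tunable knobs it mentions: class-wise budget allocation and a deterministic "keep the most sensitive points" fraction. The sensitivity proxy (distance to the class centroid), the function name `build_coreset`, and the proportional per-class budget split are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only (not the authors' implementation): a weighted coreset
# built by importance sampling on a crude sensitivity proxy, with optional
# class-wise allocation and a deterministic top-sensitivity slice.
import numpy as np


def build_coreset(X, y, budget, class_wise=True, deterministic_frac=0.0, seed=0):
    """Return (indices, weights) of a coreset of roughly `budget` points."""
    rng = np.random.default_rng(seed)
    n = len(X)

    # Sensitivity proxy (assumption): distance of each point to its class centroid.
    sens = np.empty(n)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        sens[idx] = np.linalg.norm(X[idx] - centroid, axis=1) + 1e-12

    def sample_from(idx, k):
        k = min(k, len(idx))
        order = idx[np.argsort(-sens[idx])]          # most sensitive first
        n_det = int(round(deterministic_frac * k))   # deterministic slice
        det, rest = order[:n_det], order[n_det:]
        n_rand = min(k - n_det, len(rest))
        if n_rand > 0:
            p = sens[rest] / sens[rest].sum()
            pick = rng.choice(len(rest), size=n_rand, replace=False, p=p)
            rand = rest[pick]
            # Simplified importance weights so the weighted coreset loss
            # roughly tracks the full-data loss over the sampled portion.
            w_rand = 1.0 / (n_rand * p[pick])
        else:
            rand, w_rand = np.array([], dtype=int), np.array([])
        weights = np.concatenate([np.ones(len(det)), w_rand])
        return np.concatenate([det, rand]).astype(int), weights

    if class_wise:
        picks, wts = [], []
        for c in np.unique(y):
            idx_c = np.where(y == c)[0]
            k_c = max(1, int(round(budget * len(idx_c) / n)))  # proportional budget
            ci, cw = sample_from(idx_c, k_c)
            picks.append(ci)
            wts.append(cw)
        return np.concatenate(picks), np.concatenate(wts)
    return sample_from(np.arange(n), budget)


if __name__ == "__main__":
    # Toy usage: imbalanced two-class data, 10% coreset, 30% deterministic picks.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (900, 5)), rng.normal(3, 1, (100, 5))])
    y = np.array([0] * 900 + [1] * 100)
    idx, w = build_coreset(X, y, budget=100, deterministic_frac=0.3)
    print(idx.shape, w.shape)
```

In this toy run the class-wise allocation keeps both classes represented (roughly 90 and 10 points), which is the kind of downstream-metric consideration plain loss-approximation sampling can miss; how the paper actually tunes these parameters is described in the work itself.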
Similar Papers
The Impact of Coreset Selection on Spurious Correlations and Group Robustness
Machine Learning (CS)
Cleans up computer learning data, reducing bias.
Predictive Coresets
Computation
Helps computers learn from huge amounts of data faster.
Coreset Selection via LLM-based Concept Bottlenecks
Machine Learning (CS)
Finds important data without training computers first.