Improving Model Classification by Optimizing the Training Dataset

Published: July 22, 2025 | arXiv ID: 2507.16729v1

By: Morad Tukan, Loay Mualem, Eitan Netzer, and more

Potential Business Impact:

Enables models to reach better classification accuracy while training on a fraction of the data.

Business Areas:
A/B Testing, Data and Analytics

In the era of data-centric AI, the ability to curate high-quality training data is as crucial as model design. Coresets offer a principled approach to data reduction, enabling efficient learning on large datasets through importance sampling. However, conventional sensitivity-based coreset construction often falls short in optimizing for classification performance metrics, e.g., the $F_1$ score, focusing instead on loss approximation. In this work, we present a systematic framework for tuning the coreset generation process to enhance downstream classification quality. Our method introduces new tunable parameters beyond traditional sensitivity scores, including deterministic sampling, class-wise allocation, and refinement via active sampling. Through extensive experiments on diverse datasets and classifiers, we demonstrate that tuned coresets can significantly outperform both vanilla coresets and full-dataset training on key classification metrics, offering an effective path toward better and more efficient model training.
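To make the abstract's ideas concrete, here is a minimal sketch of sensitivity-based coreset selection with two of the tunable knobs it mentions: class-wise allocation and a deterministic top-k component. This is not the paper's actual algorithm; the sensitivity proxy (distance to the class centroid), the function name, and all parameters are illustrative assumptions.

```python
import numpy as np

def sensitivity_coreset(X, y, m, deterministic_top_k=0, rng=None):
    """Illustrative coreset sketch (NOT the paper's method).

    Allocates the budget m across classes proportionally (class-wise
    allocation), optionally keeps the top-k highest-sensitivity points
    per class deterministically, and importance-samples the rest.
    Returns (indices, weights); weights 1/(m_c * p_i) keep the
    coreset loss estimate unbiased under importance sampling.
    """
    rng = np.random.default_rng(rng)
    idx_out, w_out = [], []
    for c in np.unique(y):
        idx_c = np.flatnonzero(y == c)
        # class-wise allocation: budget proportional to class frequency
        m_c = min(len(idx_c), max(1, int(round(m * len(idx_c) / len(y)))))
        # proxy sensitivity: distance from the class centroid (assumption)
        centroid = X[idx_c].mean(axis=0)
        s = np.linalg.norm(X[idx_c] - centroid, axis=1) + 1e-12
        p = s / s.sum()
        # deterministic component: keep the k most "sensitive" points
        order = np.argsort(-s)
        k = min(deterministic_top_k, m_c)
        det, pool = order[:k], order[k:]
        n_rand = min(m_c - k, len(pool))
        if n_rand > 0:
            p_pool = p[pool] / p[pool].sum()
            rand = rng.choice(pool, size=n_rand, replace=False, p=p_pool)
            chosen = np.concatenate([det, rand])
        else:
            chosen = det
        idx_out.extend(idx_c[chosen])
        w_out.extend(1.0 / (m_c * p[chosen]))
    return np.array(idx_out), np.array(w_out)
```

A classifier would then be trained on `X[indices]` with per-sample weights `weights`; the paper's framework additionally tunes these knobs (and an active-sampling refinement step) against a downstream metric such as $F_1$.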

Page Count
28 pages

Category
Computer Science:
Machine Learning (CS)