Less is More: Adaptive Coverage for Synthetic Training Data

Published: April 20, 2025 | arXiv ID: 2504.14508v2

By: Sasan Tavakkol, Max Springer, Mohammadhossein Bateni, and more

Potential Business Impact:

Trains more accurate classifiers while using less synthetic data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Synthetic training data generation with Large Language Models (LLMs) such as Google's Gemma and OpenAI's GPT offers a promising solution to the challenge of obtaining large, labeled datasets for training classifiers. When rapid model deployment is critical, such as when classifying emerging social media trends or combating new forms of online abuse tied to current events, the ability to generate training data on demand is invaluable. While prior research has examined how synthetic data compares to human-labeled data, this study introduces a novel sampling algorithm, based on the maximum coverage problem, that selects a representative subset from a synthetically generated dataset. Our results demonstrate that training a classifier on this contextually sampled subset achieves superior performance compared to training on the entire dataset. This "less is more" approach not only improves model accuracy but also reduces the volume of data required, potentially making model fine-tuning more efficient.
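The maximum coverage idea can be made concrete with a small sketch. In the illustration below, which is an assumption-laden stand-in rather than the paper's exact formulation, each synthetic example is represented by a vector embedding, an example "covers" its near neighbors under a cosine-similarity threshold, and a greedy loop repeatedly picks the example that covers the most not-yet-covered points:

```python
# Minimal sketch of greedy maximum-coverage sampling over embedded examples.
# Assumptions (not from the paper): each synthetic example already has a
# vector embedding, and an example "covers" every example within a fixed
# cosine-similarity radius. Greedy selection is the classic (1 - 1/e)-
# approximation for the NP-hard maximum coverage problem.
import numpy as np

def greedy_coverage_sample(embeddings: np.ndarray, k: int,
                           sim_threshold: float = 0.8) -> list[int]:
    """Select up to k indices whose similarity balls cover the most points."""
    # Normalize rows so dot products become cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # coverage[i, j] is True if example i covers example j.
    coverage = (normed @ normed.T) >= sim_threshold

    covered = np.zeros(len(embeddings), dtype=bool)
    selected: list[int] = []
    for _ in range(k):
        # Marginal gain: how many newly covered points each candidate adds.
        gains = (coverage & ~covered).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:  # every reachable point is already covered
            break
        selected.append(best)
        covered |= coverage[best]
    return selected

# Example usage with random vectors standing in for LLM-generated examples.
rng = np.random.default_rng(0)
fake_embeddings = rng.normal(size=(500, 64))
subset = greedy_coverage_sample(fake_embeddings, k=50)
print(f"Selected {len(subset)} of 500 synthetic examples for training.")
```

A classifier would then be fine-tuned only on the selected subset; the greedy rule is a common practical choice here because exact maximum coverage is intractable, while the greedy approximation guarantee is the best achievable in polynomial time.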

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)