Less is More: Adaptive Coverage for Synthetic Training Data
By: Sasan Tavakkol, Max Springer, Mohammadhossein Bateni, and more
Potential Business Impact:
Makes AI learn better with less fake data.
Synthetic training data generation with Large Language Models (LLMs) like Google's Gemma and OpenAI's GPT offers a promising solution to the challenge of obtaining large, labeled datasets for training classifiers. When rapid model deployment is critical, such as in classifying emerging social media trends or combating new forms of online abuse tied to current events, the ability to generate training data on demand is invaluable. While prior research has examined how synthetic data compares to human-labeled data, this study introduces a novel sampling algorithm, based on the maximum coverage problem, to select a representative subset from a synthetically generated dataset. Our results demonstrate that training a classifier on this contextually sampled subset achieves superior performance compared to training on the entire dataset. This "less is more" approach not only improves model accuracy but also reduces the volume of data required, leading to potentially more efficient model fine-tuning.
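To make the core idea concrete, here is a minimal sketch of coverage-based subset selection. It assumes synthetic examples are represented by embedding vectors and that an example "covers" its neighbors within a cosine-distance radius; the greedy loop is the standard (1 - 1/e) approximation for maximum coverage. The function name, the `radius` parameter, and the cosine-distance coverage rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def greedy_max_coverage_sample(embeddings, k, radius=0.3):
    """Greedily pick k examples that maximize coverage of the embedding space.

    Assumption: each candidate covers every example within `radius` cosine
    distance of it. This is a generic greedy max-coverage heuristic, not
    necessarily the exact coverage definition used in the paper.
    """
    # Normalize rows so cosine distance reduces to 1 - dot product.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    n = X.shape[0]

    # cover_sets[i] = indices of examples within `radius` of example i.
    sims = X @ X.T
    cover_sets = [set(np.flatnonzero(1.0 - sims[i] <= radius)) for i in range(n)]

    selected, covered = [], set()
    for _ in range(min(k, n)):
        # Pick the candidate that covers the most not-yet-covered examples.
        best = max(range(n), key=lambda i: len(cover_sets[i] - covered))
        if not cover_sets[best] - covered:
            break  # no remaining candidate adds new coverage
        selected.append(best)
        covered |= cover_sets[best]
    return selected


if __name__ == "__main__":
    # Usage: sample a representative subset of synthetic examples by embedding.
    rng = np.random.default_rng(0)
    synthetic_embeddings = rng.normal(size=(1000, 64))  # stand-in for LLM text embeddings
    subset = greedy_max_coverage_sample(synthetic_embeddings, k=100)
    print(f"Selected {len(subset)} of {synthetic_embeddings.shape[0]} examples")
```

In a real pipeline, the selected indices would point back to the synthetic examples used for fine-tuning, so the classifier trains on a smaller but more representative slice of the generated data.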
Similar Papers
Augmenting Human-Annotated Training Data with Large Language Model Generation and Distillation in Open-Response Assessment
Computation and Language
Makes computers sort text better using AI and people.
Synthetic Data Generation Using Large Language Models: Advances in Text and Code
Computation and Language
Creates fake data to train AI faster.
Data-efficient LLM Fine-tuning for Code Generation
Computation and Language
Trains computers to write better code faster.