Towards Active Synthetic Data Generation for Finetuning Language Models
By: Samuel Kessler, Menglin Xia, Daniel Madrigal Diaz, and more
Potential Business Impact:
Teaches computers to learn better from examples.
A common and effective means for improving language model capabilities involves finetuning a "student" language model's parameters on generations from a more proficient "teacher" model. Termed "synthetic data", these generations are often produced before any student finetuning begins, but some work has considered generating new synthetic samples as training progresses. This paper studies and advocates for the latter case, where data are generated in an iterative, closed-loop fashion guided by the current state of the student model. For a fixed budget of generated samples, or a budget in terms of compute spent querying a teacher, we show that this curation of finetuning data affords improved student performance over static generation. Further, while several LLM-specific methods have been proposed that operate in this regime, we find that simple, inexpensive selection criteria from the active learning literature tend to be most performant. We validate these claims across four mathematical and logical reasoning datasets using four different small language models.
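The abstract does not include code, but the closed-loop regime it describes can be sketched concretely. Below is a minimal, hypothetical Python sketch of one round-based loop: a teacher generates candidate samples under a fixed generation budget, a cheap active-learning criterion (for instance, the current student's loss or predictive entropy on each candidate, standing in for the "simple, inexpensive selection criteria" mentioned above) ranks them, and the student is finetuned on the top-ranked samples before the next batch is generated. All function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
import heapq
from typing import Callable, List


def active_synthetic_finetuning(
    generate: Callable[[int], List[str]],   # teacher: produce n candidate samples
    score: Callable[[str], float],          # selection criterion under the *current* student
    finetune: Callable[[List[str]], None],  # one finetuning step on the student
    total_budget: int,                      # total teacher generations allowed
    round_size: int,                        # candidates generated per round
    keep_per_round: int,                    # samples kept for training each round
) -> None:
    """Closed-loop synthetic data generation: each round, query the teacher
    for a batch of candidates, score them against the student's current
    state, keep the highest-scoring ones, and take a finetuning step
    before generating the next batch."""
    spent = 0
    while spent < total_budget:
        n = min(round_size, total_budget - spent)
        candidates = generate(n)
        spent += n
        # Rank candidates by the active-learning criterion; higher = more informative.
        selected = heapq.nlargest(keep_per_round, candidates, key=score)
        finetune(selected)


if __name__ == "__main__":
    # Toy stand-ins so the loop runs end to end; a real setup would plug in
    # a teacher LLM, a student-based uncertainty score, and a trainer.
    import random

    pool = [f"problem-{i}" for i in range(1000)]
    active_synthetic_finetuning(
        generate=lambda n: random.sample(pool, n),
        score=lambda s: random.random(),  # placeholder for student loss/entropy
        finetune=lambda batch: print(f"finetuning on {len(batch)} samples"),
        total_budget=100,
        round_size=20,
        keep_per_round=5,
    )
```

Because selection depends on the student's current state, the ranking changes from round to round; a static pipeline would instead generate and fix all data before the first finetuning step.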
Similar Papers
AutoGeTS: Knowledge-based Automated Generation of Text Synthetics for Improving Text Classification
Computation and Language
Makes AI understand text better with fake examples.
Meta-Learning and Synthetic Data for Automated Pretraining and Finetuning
Machine Learning (CS)
Helps computers learn faster with less data.
Data Value in the Age of Scaling: Understanding LLM Scaling Dynamics Under Real-Synthetic Data Mixtures
Machine Learning (CS)
Makes AI learn better from mixed data.