Score: 1

Towards Active Synthetic Data Generation for Finetuning Language Models

Published: November 30, 2025 | arXiv ID: 2512.00884v1

By: Samuel Kessler, Menglin Xia, Daniel Madrigal Diaz, and more

BigTech Affiliations: Microsoft

Potential Business Impact:

Helps smaller AI models learn more effectively from examples produced by larger models, getting better results from a fixed data or compute budget.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

A common and effective means for improving language model capabilities involves finetuning a "student" language model's parameters on generations from a more proficient "teacher" model. Termed "synthetic data", these generations are often produced before any student finetuning, but some work has considered generating new synthetic samples as training progresses. This paper studies and advocates for the latter case, where data are generated in an iterative, closed-loop fashion that is guided by the current state of the student model. For a fixed budget of generated samples, or a budget in terms of compute spent querying a teacher, we show that this curation of finetuning data affords improved student performance over static generation. Further, while there have been several LLM-specific methods proposed that operate in this regime, we find that simple, inexpensive selection criteria from the active learning literature tend to be most performant. We validate these claims across four mathematical and logical reasoning datasets using four different small language models.
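To make the closed-loop setup concrete, here is a minimal sketch of how iterative, student-guided synthetic data generation with a simple active-learning selection criterion might look. It is an illustration under stated assumptions, not the paper's implementation: the function names (`generate_candidates`, `student_uncertainty`, `finetune_step`) and the budgets are hypothetical placeholders, and the uncertainty score stands in for an inexpensive criterion such as mean token entropy or negative log-likelihood under the current student.

```python
# Hypothetical sketch of closed-loop ("active") synthetic data generation.
# Assumption: selection uses a simple student-uncertainty criterion; all
# callables below are placeholders, not the paper's actual code.

import random
from typing import Callable, List


def active_synthetic_finetuning(
    generate_candidates: Callable[[int], List[str]],  # query the teacher for samples
    student_uncertainty: Callable[[str], float],      # e.g. mean token entropy / NLL
    finetune_step: Callable[[List[str]], None],       # one finetuning update on chosen data
    rounds: int = 10,
    candidates_per_round: int = 256,
    selected_per_round: int = 64,
) -> None:
    """Iteratively request teacher generations, keep only the samples the
    current student is most uncertain about, and finetune on them."""
    for _ in range(rounds):
        # 1) Teacher proposes a pool of candidate synthetic samples.
        pool = generate_candidates(candidates_per_round)
        # 2) Score each candidate with the *current* student (the closed loop).
        ranked = sorted(pool, key=student_uncertainty, reverse=True)
        # 3) Spend the per-round sample budget on the most informative candidates.
        chosen = ranked[:selected_per_round]
        # 4) Update the student, so next round's scores reflect its new state.
        finetune_step(chosen)


# Toy usage with stub functions, just to show the control flow.
if __name__ == "__main__":
    corpus = [f"problem-{i}" for i in range(10_000)]
    active_synthetic_finetuning(
        generate_candidates=lambda n: random.sample(corpus, n),
        student_uncertainty=lambda s: random.random(),  # stand-in for entropy/NLL
        finetune_step=lambda batch: None,               # stand-in for a training step
        rounds=3,
    )
```

The key contrast with static generation is step 2: because candidates are re-scored by the student after every finetuning update, the selected data adapts to what the student currently finds difficult rather than being fixed up front.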

Country of Origin
🇺🇸 United States

Page Count
36 pages

Category
Computer Science:
Machine Learning (CS)