Score: 1

Contrastive Decoding for Synthetic Data Generation in Low-Resource Language Modeling

Published: October 9, 2025 | arXiv ID: 2510.08245v1

By: Jannek Ulm, Kevin Du, Vésteinn Snæbjarnarson

Potential Business Impact:

Improves language model quality by mixing synthetic, model-generated text into training data, which could help when real-world text is scarce.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are trained on huge amounts of textual data, and concerns have been raised that the limits of such data may soon be reached. A potential solution is to train on synthetic data sampled from LLMs. In this work, we build on this idea and investigate the benefits of contrastive decoding for generating synthetic corpora. In a controlled setting, we experiment with sampling corpora using the relative difference between a good and a bad model trained on the same original corpus of 100 million words. By amplifying the signal from the better-performing model, we create a synthetic corpus and mix it with the original training data. Our findings show that training on a mixture of synthesized and real data improves performance on the language modeling objective and a range of downstream tasks. In particular, we see that training on a mix that includes synthetic data from contrastive decoding benefits tasks that require more reasoning skills, while synthetic data from traditional sampling helps more on tasks that depend on surface-level linguistic capabilities.
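To make the sampling idea concrete, below is a minimal sketch of one contrastive-decoding step, where the next token is drawn from a distribution that amplifies tokens the stronger ("good") model prefers relative to the weaker ("bad") model. This is not the authors' exact formulation; the `alpha` plausibility cutoff and `beta` penalty weight follow common contrastive-decoding practice and are assumptions here.

```python
import torch
import torch.nn.functional as F

def contrastive_decode_step(expert_logits, amateur_logits, alpha=0.1, beta=1.0):
    """One contrastive-decoding sampling step (illustrative sketch).

    expert_logits, amateur_logits: 1-D tensors of next-token logits from the
        stronger and weaker model, respectively.
    alpha: plausibility cutoff relative to the expert's most likely token.
    beta: strength of the penalty applied to the weaker model's scores.
    """
    expert_logp = F.log_softmax(expert_logits, dim=-1)
    amateur_logp = F.log_softmax(amateur_logits, dim=-1)

    # Plausibility mask: keep only tokens the expert itself considers likely,
    # so the contrast does not promote implausible tokens.
    cutoff = torch.log(torch.tensor(alpha)) + expert_logp.max()
    keep = expert_logp >= cutoff

    # Contrastive score: expert log-prob minus (scaled) amateur log-prob,
    # amplifying tokens where the expert outperforms the amateur.
    scores = expert_logp - beta * amateur_logp
    scores = scores.masked_fill(~keep, float("-inf"))

    # Sample the next token from the renormalized contrastive distribution.
    probs = F.softmax(scores, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```

Repeating this step autoregressively with both models conditioned on the same prefix would yield the kind of synthetic corpus the abstract describes, which can then be mixed with the original training data.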

Country of Origin
🇨🇭 Switzerland

Repos / Data Links

Page Count
13 pages

Category
Computer Science:
Computation and Language