Contrastive Decoding for Synthetic Data Generation in Low-Resource Language Modeling
By: Jannek Ulm, Kevin Du, Vésteinn Snæbjarnarson
Potential Business Impact:
Makes AI smarter by training it on computer-generated text.
Large language models (LLMs) are trained on huge amounts of textual data, and concerns have been raised that the limits of such data may soon be reached. A potential solution is to train on synthetic data sampled from LLMs. In this work, we build on this idea and investigate the benefits of contrastive decoding for generating synthetic corpora. In a controlled setting, we experiment with sampling corpora using the relative difference between a good and a bad model, both trained on the same original corpus of 100 million words. By amplifying the signal from the better-performing model, we create a synthetic corpus and mix it with the original training data. Our findings show that training on a mixture of synthesized and real data improves performance on the language modeling objective and a range of downstream tasks. In particular, we see that training with a mix of synthetic data from contrastive decoding benefits tasks that require more reasoning skills, while synthetic data from traditional sampling helps more on tasks that depend on surface-level linguistic capabilities.
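To make the sampling idea concrete, here is a minimal sketch of contrastive decoding between a stronger ("good") and weaker ("bad") causal language model. This is an illustration of the general technique, not the paper's implementation: the checkpoints, the plausibility threshold alpha, and the temperature are placeholder assumptions.

```python
# Sketch of contrastive decoding for generating synthetic text.
# Assumptions (not from the paper): gpt2-medium as the "good" model,
# gpt2 as the "bad" model, alpha=0.1, temperature=1.0.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")                 # placeholder checkpoints
good = AutoModelForCausalLM.from_pretrained("gpt2-medium").eval()
bad = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def contrastive_sample(prompt, max_new_tokens=50, alpha=0.1, temperature=1.0):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        good_logprobs = torch.log_softmax(good(ids).logits[:, -1, :], dim=-1)
        bad_logprobs = torch.log_softmax(bad(ids).logits[:, -1, :], dim=-1)

        # Plausibility constraint: keep only tokens the good model itself
        # assigns at least alpha times its top probability to, then score
        # each remaining token by the good-vs-bad log-probability difference.
        cutoff = torch.log(torch.tensor(alpha)) + good_logprobs.max(dim=-1, keepdim=True).values
        scores = good_logprobs - bad_logprobs
        scores = scores.masked_fill(good_logprobs < cutoff, float("-inf"))

        # Sample the next token from the contrastive scores.
        probs = torch.softmax(scores / temperature, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_token], dim=-1)
        if next_token.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(contrastive_sample("The history of the city begins"))
```

Repeating this sampling loop over many prompts yields a synthetic corpus that amplifies what the stronger model knows relative to the weaker one; that corpus can then be mixed with the original training data, as the paper describes.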
Similar Papers
Synthetic Data Generation Using Large Language Models: Advances in Text and Code
Computation and Language
Creates fake data to train AI faster.
Increasing LLM Coding Capabilities through Diverse Synthetic Coding Tasks
Machine Learning (CS)
Teaches computers to code by showing thinking steps.
LayerCake: Token-Aware Contrastive Decoding within Large Language Model Layers
Artificial Intelligence
Makes AI tell the truth more often.