Datasets, Documents, and Repetitions: The Practicalities of Unequal Data Quality
By: Alex Fang, Hadi Pouransari, Matt Jordan, and more
Potential Business Impact:
Repeating high-quality, filtered training data can produce better AI models at lower cost than simply adding more raw data.
Data filtering has become a powerful tool for improving model performance while reducing computational cost. However, as large language model compute budgets continue to grow, the limited data volume provided by heavily filtered and deduplicated datasets will become a practical constraint. In efforts to better understand how to proceed, we study model performance at various compute budgets and across multiple pre-training datasets created through data filtering and deduplication. We find that, given appropriate modifications to the training recipe, repeating existing aggressively filtered datasets for up to ten epochs can outperform training on the ten times larger superset for a single epoch across multiple compute budget orders of magnitude. While this finding relies on repeating the dataset for many epochs, we also investigate repeats within these datasets at the document level. We find that not all documents within a dataset are equal, and we can create better datasets relative to a token budget by explicitly manipulating the counts of individual documents. We conclude by arguing that even as large language models scale, data filtering remains an important direction of research.
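To make the document-level idea concrete, here is a minimal sketch of budget-constrained document repetition. It assumes each document carries a pre-computed quality score (for example, from a filtering classifier); the `Document` class, `build_repeated_dataset` function, greedy round-based strategy, and `max_repeats` cap are illustrative assumptions, not the paper's exact recipe.

```python
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    num_tokens: int
    quality: float  # higher = repeated more; e.g. a filter-classifier score (assumed)


def build_repeated_dataset(docs, token_budget, max_repeats=10):
    """Fill a token budget by repeating documents, highest quality first,
    adding at most max_repeats copies of any single document."""
    ordered = sorted(range(len(docs)), key=lambda i: docs[i].quality, reverse=True)
    repeat_counts = [0] * len(docs)
    tokens_used = 0
    # Sweep in rounds: each round adds one more copy of every document that
    # still fits in the budget, visiting higher-quality documents first.
    for _ in range(max_repeats):
        for i in ordered:
            if tokens_used + docs[i].num_tokens > token_budget:
                continue
            repeat_counts[i] += 1
            tokens_used += docs[i].num_tokens
    return repeat_counts, tokens_used


if __name__ == "__main__":
    docs = [
        Document("high-quality doc", 1000, quality=0.9),
        Document("medium-quality doc", 1000, quality=0.5),
        Document("low-quality doc", 1000, quality=0.1),
    ]
    counts, used = build_repeated_dataset(docs, token_budget=8000)
    print(counts, used)  # -> [3, 3, 2] copies, 8000 tokens
```

The design point this sketch illustrates is that, under a fixed token budget, per-document repeat counts become a tunable knob: higher-quality documents can be seen more often instead of padding the budget with lower-quality ones, which is the spirit of the paper's document-level manipulation.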
Similar Papers
Assessing the Role of Data Quality in Training Bilingual Language Models
Computation and Language
Improves language AI for all by cleaning training data.
The interplay between domain specialization and model size
Computation and Language
Makes AI smarter in specific jobs with less training.
Improving Romanian LLM Pretraining Data using Diversity and Quality Filtering
Computation and Language
Improves computer understanding of Romanian language.