Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training
By: Pierre-Carl Langlais, Carlos Rosas Hinostroza, Mattia Nee and more
Potential Business Impact:
Creates free, legally safe data for training AI language models.
Large Language Models (LLMs) are pre-trained on vast amounts of data drawn from many sources and domains. These corpora typically contain trillions of tokens, large portions of which are copyrighted or proprietary, which restricts the use of the resulting models under emerging AI legislation. This raises the need for truly open pre-training data that complies with data regulations. In this paper, we introduce Common Corpus, the largest open dataset for language model pre-training. The data assembled in Common Corpus are either uncopyrighted or released under permissive licenses and amount to about two trillion tokens. The dataset covers a wide variety of languages, from the major European languages to low-resource ones rarely present in pre-training datasets, and it includes a large portion of code data. The diversity of its sources, in terms of both domains covered and time periods, makes it suitable for research and commercial applications across many areas of knowledge. In this technical report, we document the provenance of the assembled data and detail the filtering and curation of the dataset. Common Corpus is already used by industry leaders such as Anthropic and by multiple LLM training projects, and we believe it will become critical infrastructure for open science research on LLMs.
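As a practical starting point, the minimal sketch below streams the corpus with the Hugging Face datasets library, so the roughly two-trillion-token collection never has to be downloaded in full. The dataset identifier "PleIAs/common_corpus" and the "text" field name are assumptions based on the public release; check the dataset card before relying on them.

# Minimal sketch: stream Common Corpus for inspection or pre-training.
# Assumes the dataset is published as "PleIAs/common_corpus" on the
# Hugging Face Hub and that each record carries a "text" field.
from datasets import load_dataset

# streaming=True iterates over records lazily instead of materializing
# the full ~2T-token corpus on disk.
corpus = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

# Peek at the first few documents.
for i, record in enumerate(corpus):
    print(record.get("text", "")[:200])  # first 200 characters of each document
    if i >= 2:
        break

The same streaming iterator can feed a tokenizer and training loop directly, which is the usual pattern for corpora too large to store locally.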
Similar Papers
The German Commons - 154 Billion Tokens of Openly Licensed Text for German Language Models
Computation and Language
Creates open German AI that understands German text.
The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text
Computation and Language
Makes AI models learn from only legal text.
The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models
Computation and Language
Makes AI models safer by using legal data.