The German Commons - 154 Billion Tokens of Openly Licensed Text for German Language Models
By: Lukas Gienapp, Christopher Schröder, Stefan Schweter, and more
Potential Business Impact:
Enables fully open, legally redistributable German language models by providing large-scale, verifiably licensed German training text.
Large language model development relies on large-scale training corpora, yet most contain data of unclear licensing status, limiting the development of truly open models. This problem is exacerbated for non-English languages, where openly licensed text remains critically scarce. We introduce the German Commons, the largest collection of openly licensed German text to date. It compiles data from 41 sources across seven domains, encompassing legal, scientific, cultural, political, news, economic, and web text. Through systematic sourcing from established data providers with verifiable licensing, it yields 154.56 billion tokens of high-quality text for language model training. Our processing pipeline implements comprehensive quality filtering, deduplication, and text formatting fixes, ensuring consistent quality across heterogeneous text sources. All domain subsets feature licenses of at least CC-BY-SA 4.0 or equivalent, ensuring legal compliance for model training and redistribution. The German Commons therefore addresses the critical gap in openly licensed German pretraining data, and enables the development of truly open German language models. We also release code for corpus construction and data filtering tailored to German language text, rendering the German Commons fully reproducible and extensible.
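The abstract names quality filtering, deduplication, and text formatting fixes as pipeline stages but does not describe their mechanics. As a rough illustration only, the Python sketch below shows what a minimal pass of that kind could look like: Unicode and whitespace normalization, two heuristic quality checks, and exact-hash deduplication. All thresholds and helper names here are assumptions for illustration; the German Commons release ships its own pipeline, which this does not reproduce.

```python
# Illustrative sketch of a corpus-cleaning pass: formatting fixes,
# heuristic quality filters, and exact-hash deduplication.
# Thresholds and function names are hypothetical, not the paper's values.

import hashlib
import re
import unicodedata


def normalize(text: str) -> str:
    """Basic formatting fixes: Unicode NFC normalization plus
    whitespace collapsing."""
    text = unicodedata.normalize("NFC", text)
    text = re.sub(r"[ \t]+", " ", text)      # collapse runs of spaces/tabs
    text = re.sub(r"\n{3,}", "\n\n", text)   # cap consecutive blank lines
    return text.strip()


def passes_quality(text: str, min_chars: int = 200,
                   min_alpha_ratio: float = 0.6) -> bool:
    """Keep documents that are long enough and mostly alphabetic.
    Both thresholds are made-up examples."""
    if len(text) < min_chars:
        return False
    alpha = sum(c.isalpha() for c in text)
    return alpha / len(text) >= min_alpha_ratio


def dedup_and_filter(docs):
    """Yield normalized documents, dropping exact duplicates and
    texts that fail the quality heuristics."""
    seen = set()
    for doc in docs:
        doc = normalize(doc)
        if not passes_quality(doc):
            continue
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        yield doc


if __name__ == "__main__":
    corpus = [
        "Dies ist ein Beispieltext über offene Lizenzen. " * 10,
        "Dies ist ein Beispieltext über offene Lizenzen. " * 10,  # duplicate
        "123 456 789",  # too short and non-alphabetic
    ]
    print(sum(1 for _ in dedup_and_filter(corpus)))  # -> 1
```

A real pipeline at this scale would typically replace the exact-hash step with near-duplicate detection (e.g., MinHash) and add language identification and domain-specific filters, but the control flow is the same: normalize, filter, then deduplicate before writing out the corpus.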
Similar Papers
Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training
Computation and Language
Assembles a large collection of ethically sourced, openly licensed data for LLM pretraining.
The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text
Computation and Language
Provides 8TB of public domain and openly licensed text for legally unencumbered LLM training.
The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models
Computation and Language
Offers copyright-clean training resources that reduce legal risk in LLM development.