Improving Romanian LLM Pretraining Data using Diversity and Quality Filtering
By: Vlad Negoita, Mihai Masala, Traian Rebedea
Potential Business Impact:
Improves computer understanding of the Romanian language.
Large Language Models (LLMs) have recently exploded in popularity, often matching or outperforming human abilities on many tasks. One of the key factors in training LLMs is the availability and curation of high-quality data. Data quality is especially crucial for under-represented languages, where high-quality corpora are scarce. In this work, we study the characteristics and coverage of Romanian pretraining corpora and examine how they differ from English data. By training a lightweight multitask model on carefully LLM-annotated Romanian texts, we are able to analyze the data and perform multi-level filtering (e.g., by educational value, topic, format) to generate high-quality pretraining datasets. Our experiments reveal noteworthy differences in the topics present in Romanian and English data, and demonstrate the effectiveness of the filtering through improved LLM pretraining performance across multiple benchmarks.
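The abstract describes annotating each document at several levels (educational value, topic, format) with a lightweight multitask model and keeping only documents that pass every filter. The sketch below illustrates that kind of multi-level filtering; the field names, score scale, and thresholds are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of multi-level filtering over annotated documents.
# Field names, the score scale, and the thresholds are hypothetical,
# chosen only to illustrate the idea described in the abstract.

from dataclasses import dataclass

@dataclass
class AnnotatedDoc:
    text: str
    edu_score: float   # educational value predicted by the multitask model
    topic: str         # e.g., "science", "news", "spam"
    doc_format: str    # e.g., "article", "forum", "boilerplate"

def keep(doc: AnnotatedDoc,
         min_edu: float = 2.0,
         blocked_topics: frozenset = frozenset({"spam", "adult"}),
         blocked_formats: frozenset = frozenset({"boilerplate"})) -> bool:
    """A document must pass the filter at every annotation level."""
    return (doc.edu_score >= min_edu
            and doc.topic not in blocked_topics
            and doc.doc_format not in blocked_formats)

def filter_corpus(docs):
    """Keep only the documents that satisfy all filters."""
    return [d for d in docs if keep(d)]
```

In this reading, the multitask model supplies the per-document annotations, and the filtering step itself reduces to cheap threshold and set-membership checks that can be tuned independently for each level.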
Similar Papers
Building High-Quality Datasets for Portuguese LLMs: From Common Crawl Snapshots to Industrial-Grade Corpora
Computation and Language
Builds better AI for languages other than English.
Revisiting Multilingual Data Mixtures in Language Model Pretraining
Computation and Language
Makes computers understand many languages better.
Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation
Computation and Language
Teaches computers which writing styles help them learn best.