Score: 1

Curió-Edu 7B: Examining Data Selection Impacts in LLM Continued Pretraining

Published: December 14, 2025 | arXiv ID: 2512.12770v1

By: Thales Sales Almeida, Rodrigo Nogueira, Hélio Pedrini

Potential Business Impact:

Makes AI language models better at a target language (here, Portuguese) using far less data and compute.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Continued pretraining extends a language model's capabilities by further exposing it to additional data, often tailored to a specific linguistic or domain context. This strategy has emerged as an efficient alternative to full retraining when adapting general-purpose models to new settings. In this work, we investigate this paradigm through Curió 7B, a 7-billion-parameter model derived from LLaMA-2 and trained on 100 billion Portuguese tokens from the ClassiCC-PT corpus - the most extensive Portuguese-specific continued-pretraining effort above the three-billion-parameter scale to date. Beyond scale, we investigate whether quantity alone suffices or whether data quality plays a decisive role in linguistic adaptation. To this end, we introduce Curió-Edu 7B, a variant trained exclusively on the educational and STEM-filtered subset of the same corpus, totaling just 10 billion tokens. Despite using only 10% of the data and 20% of the computation, Curió-Edu 7B surpasses the full-corpus model in our evaluations, demonstrating that data selection can be fundamental even when adapting models with limited prior exposure to the target language. The developed models are available at https://huggingface.co/collections/ClassiCC-Corpus/curio-edu
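The abstract's recipe, filter a large corpus down to a high-quality educational/STEM subset and continue pretraining a LLaMA-2 base on it, can be sketched with standard Hugging Face tooling. This is a minimal illustration only: the dataset id, the `edu_score` field, the 0.8 threshold, and all hyperparameters are assumptions for illustration, not the authors' actual pipeline.

```python
# Hedged sketch: continued pretraining on an education-filtered subset of a
# Portuguese corpus. Dataset id, score field, threshold, and hyperparameters
# are placeholders; the paper's real filtering and training setup may differ.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"      # base model named in the abstract
CORPUS = "ClassiCC-Corpus/ClassiCC-PT"       # hypothetical dataset id
EDU_SCORE_THRESHOLD = 0.8                    # hypothetical classifier cutoff

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Keep only documents an educational/STEM quality classifier scored highly,
# mirroring the Curió-Edu idea of training on ~10% of the full corpus.
raw = load_dataset(CORPUS, split="train", streaming=True)
edu = raw.filter(lambda ex: ex.get("edu_score", 0.0) >= EDU_SCORE_THRESHOLD)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = edu.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="curio-edu-7b",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=64,
        learning_rate=2e-5,
        bf16=True,
        max_steps=10_000,                    # placeholder; the paper's budget is ~10B tokens
        logging_steps=100,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, the only change between the full-corpus run (Curió 7B) and the filtered run (Curió-Edu 7B) in a setup like this would be the `filter` step and the token budget, which is what isolates the effect of data selection.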

Repos / Data Links
https://huggingface.co/collections/ClassiCC-Corpus/curio-edu

Page Count
11 pages

Category
Computer Science:
Computation and Language