Curió-Edu 7B: Examining Data Selection Impacts in LLM Continued Pretraining
By: Thales Sales Almeida, Rodrigo Nogueira, Hélio Pedrini
Potential Business Impact:
Makes computer language models smarter with less data.
Continued pretraining extends a language model's capabilities by exposing it to additional data, often tailored to a specific linguistic or domain context. This strategy has emerged as an efficient alternative to full retraining when adapting general-purpose models to new settings. In this work, we investigate this paradigm through Curió 7B, a 7-billion-parameter model derived from LLaMA-2 and trained on 100 billion Portuguese tokens from the ClassiCC-PT corpus, the most extensive Portuguese-specific continued-pretraining effort above the three-billion-parameter scale to date. Beyond scale, we examine whether quantity alone suffices or whether data quality plays a decisive role in linguistic adaptation. To this end, we introduce Curió-Edu 7B, a variant trained exclusively on the educational and STEM-filtered subset of the same corpus, totaling just 10 billion tokens. Despite using only 10% of the data and 20% of the computation, Curió-Edu 7B surpasses the full-corpus model in our evaluations, demonstrating that data selection can be fundamental even when adapting models with limited prior exposure to the target language. The models are available at https://huggingface.co/collections/ClassiCC-Corpus/curio-edu
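As a rough illustration of the data-selection step described in the abstract, the sketch below filters a streamed web corpus down to documents that pass an educational-quality threshold, which is the kind of subset that would then feed a standard continued-pretraining run. The dataset id `ClassiCC-Corpus/ClassiCC-PT`, the `edu_score` field, and the cutoff value are assumptions made for illustration; the paper and the linked model collection define the actual corpus layout and filtering criteria.

```python
# Minimal sketch of score-based data selection before continued pretraining.
# Assumptions (not from the paper): the corpus is hosted at the HF repo id
# "ClassiCC-Corpus/ClassiCC-PT" and each document carries an "edu_score"
# classifier value; both names and the threshold are illustrative only.
from datasets import load_dataset

# Stream the corpus so the full multi-billion-token dataset never has to fit in memory.
corpus = load_dataset("ClassiCC-Corpus/ClassiCC-PT", split="train", streaming=True)

EDU_THRESHOLD = 3  # assumed cutoff on the educational-quality score


def is_educational(example):
    # Keep only documents whose (assumed) classifier score passes the cutoff.
    return example.get("edu_score", 0) >= EDU_THRESHOLD


edu_subset = corpus.filter(is_educational)

# The retained subset (roughly 10% of documents in the paper's setting) would then
# be tokenized and used for causal-LM continued pretraining of a LLaMA-2 checkpoint.
for doc in edu_subset.take(3):
    print(doc["text"][:200])
```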
Similar Papers
Domain-Adaptive Continued Pre-Training of Small Language Models
Computation and Language
Makes small AI models smarter with less computing power.
Building High-Quality Datasets for Portuguese LLMs: From Common Crawl Snapshots to Industrial-Grade Corpora
Computation and Language
Builds better AI for languages other than English.
Influence-driven Curriculum Learning for Pre-training on Limited Data
Computation and Language
Teaches computers to learn faster by sorting lessons.